May 13 23:57:13.933054 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025
May 13 23:57:13.933099 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:57:13.933112 kernel: BIOS-provided physical RAM map:
May 13 23:57:13.933119 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 23:57:13.933125 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
May 13 23:57:13.933132 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 13 23:57:13.933140 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 13 23:57:13.933147 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 13 23:57:13.933154 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 13 23:57:13.933161 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 13 23:57:13.933170 kernel: NX (Execute Disable) protection: active
May 13 23:57:13.933177 kernel: APIC: Static calls initialized
May 13 23:57:13.933184 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
May 13 23:57:13.933191 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
May 13 23:57:13.933200 kernel: extended physical RAM map:
May 13 23:57:13.933207 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 23:57:13.933218 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
May 13 23:57:13.933225 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
May 13 23:57:13.933233 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
May 13 23:57:13.933240 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 13 23:57:13.933248 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 13 23:57:13.933266 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 13 23:57:13.933274 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 13 23:57:13.933282 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 13 23:57:13.933290 kernel: efi: EFI v2.7 by EDK II
May 13 23:57:13.933297 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
May 13 23:57:13.933308 kernel: secureboot: Secure boot disabled
May 13 23:57:13.933316 kernel: SMBIOS 2.7 present.
May 13 23:57:13.933323 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
May 13 23:57:13.933331 kernel: Hypervisor detected: KVM
May 13 23:57:13.933338 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 23:57:13.934380 kernel: kvm-clock: using sched offset of 3698333247 cycles
May 13 23:57:13.934393 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 23:57:13.934402 kernel: tsc: Detected 2499.994 MHz processor
May 13 23:57:13.934410 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 23:57:13.934418 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 23:57:13.934426 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
May 13 23:57:13.934438 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 13 23:57:13.934446 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 23:57:13.934454 kernel: Using GB pages for direct mapping
May 13 23:57:13.934466 kernel: ACPI: Early table checksum verification disabled
May 13 23:57:13.934475 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
May 13 23:57:13.934483 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
May 13 23:57:13.934494 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 13 23:57:13.934509 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
May 13 23:57:13.934517 kernel: ACPI: FACS 0x00000000789D0000 000040
May 13 23:57:13.934526 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
May 13 23:57:13.934534 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 13 23:57:13.934542 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 13 23:57:13.934550 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
May 13 23:57:13.934559 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
May 13 23:57:13.934570 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 13 23:57:13.934578 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 13 23:57:13.934587 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
May 13 23:57:13.934595 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
May 13 23:57:13.934603 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
May 13 23:57:13.934611 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
May 13 23:57:13.934619 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
May 13 23:57:13.934628 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
May 13 23:57:13.934636 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
May 13 23:57:13.934647 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
May 13 23:57:13.934655 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
May 13 23:57:13.934663 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
May 13 23:57:13.934671 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
May 13 23:57:13.934679 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
May 13 23:57:13.934688 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 13 23:57:13.934696 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 13 23:57:13.934704 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
May 13 23:57:13.934713 kernel: NUMA: Initialized distance table, cnt=1
May 13 23:57:13.934723 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
May 13 23:57:13.934732 kernel: Zone ranges:
May 13 23:57:13.934740 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 23:57:13.934748 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
May 13 23:57:13.934756 kernel: Normal empty
May 13 23:57:13.934764 kernel: Movable zone start for each node
May 13 23:57:13.934772 kernel: Early memory node ranges
May 13 23:57:13.934781 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 13 23:57:13.934789 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
May 13 23:57:13.934800 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
May 13 23:57:13.934808 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
May 13 23:57:13.934816 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 23:57:13.934824 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 13 23:57:13.934832 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 13 23:57:13.934840 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
May 13 23:57:13.934849 kernel: ACPI: PM-Timer IO Port: 0xb008
May 13 23:57:13.934857 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 23:57:13.934865 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
May 13 23:57:13.934873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 23:57:13.934884 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 23:57:13.934893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 23:57:13.934901 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 23:57:13.934909 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 23:57:13.934917 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 23:57:13.934925 kernel: TSC deadline timer available
May 13 23:57:13.934933 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 13 23:57:13.934942 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 23:57:13.934950 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
May 13 23:57:13.934961 kernel: Booting paravirtualized kernel on KVM
May 13 23:57:13.934969 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 23:57:13.934978 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 13 23:57:13.934986 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 13 23:57:13.934994 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 13 23:57:13.935002 kernel: pcpu-alloc: [0] 0 1
May 13 23:57:13.935011 kernel: kvm-guest: PV spinlocks enabled
May 13 23:57:13.935019 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 23:57:13.935029 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:57:13.935040 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:57:13.935048 kernel: random: crng init done
May 13 23:57:13.935057 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:57:13.935065 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 13 23:57:13.935073 kernel: Fallback order for Node 0: 0
May 13 23:57:13.935081 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
May 13 23:57:13.935089 kernel: Policy zone: DMA32
May 13 23:57:13.935098 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:57:13.935109 kernel: Memory: 1870488K/2037804K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 167060K reserved, 0K cma-reserved)
May 13 23:57:13.935117 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 23:57:13.935125 kernel: Kernel/User page tables isolation: enabled
May 13 23:57:13.935134 kernel: ftrace: allocating 37993 entries in 149 pages
May 13 23:57:13.935151 kernel: ftrace: allocated 149 pages with 4 groups
May 13 23:57:13.935163 kernel: Dynamic Preempt: voluntary
May 13 23:57:13.935171 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:57:13.935181 kernel: rcu: RCU event tracing is enabled.
May 13 23:57:13.935190 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 23:57:13.935199 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:57:13.935208 kernel: Rude variant of Tasks RCU enabled.
May 13 23:57:13.935219 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:57:13.935229 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:57:13.935237 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 23:57:13.935246 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 13 23:57:13.935255 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:57:13.935266 kernel: Console: colour dummy device 80x25
May 13 23:57:13.935275 kernel: printk: console [tty0] enabled
May 13 23:57:13.935284 kernel: printk: console [ttyS0] enabled
May 13 23:57:13.935293 kernel: ACPI: Core revision 20230628
May 13 23:57:13.935302 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
May 13 23:57:13.935311 kernel: APIC: Switch to symmetric I/O mode setup
May 13 23:57:13.935319 kernel: x2apic enabled
May 13 23:57:13.935328 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 23:57:13.935337 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
May 13 23:57:13.935358 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
May 13 23:57:13.936409 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 13 23:57:13.936420 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 13 23:57:13.936430 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 23:57:13.936440 kernel: Spectre V2 : Mitigation: Retpolines
May 13 23:57:13.936448 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 23:57:13.936457 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 13 23:57:13.936466 kernel: RETBleed: Vulnerable
May 13 23:57:13.936475 kernel: Speculative Store Bypass: Vulnerable
May 13 23:57:13.936484 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
May 13 23:57:13.936507 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 13 23:57:13.936516 kernel: GDS: Unknown: Dependent on hypervisor status
May 13 23:57:13.936525 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 23:57:13.936537 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 23:57:13.936545 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 23:57:13.936555 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 13 23:57:13.936563 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 13 23:57:13.936572 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 13 23:57:13.936581 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 13 23:57:13.936589 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 13 23:57:13.936599 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 13 23:57:13.936611 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 23:57:13.936620 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 13 23:57:13.936628 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 13 23:57:13.936637 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
May 13 23:57:13.936646 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
May 13 23:57:13.936654 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
May 13 23:57:13.936663 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
May 13 23:57:13.936671 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
May 13 23:57:13.936680 kernel: Freeing SMP alternatives memory: 32K
May 13 23:57:13.936689 kernel: pid_max: default: 32768 minimum: 301
May 13 23:57:13.936697 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:57:13.936706 kernel: landlock: Up and running.
May 13 23:57:13.936718 kernel: SELinux: Initializing.
May 13 23:57:13.936726 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 13 23:57:13.936735 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 13 23:57:13.936744 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 13 23:57:13.936753 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:57:13.936762 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:57:13.936771 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:57:13.936781 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 13 23:57:13.936789 kernel: signal: max sigframe size: 3632
May 13 23:57:13.936801 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:57:13.936811 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:57:13.936819 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 13 23:57:13.936828 kernel: smp: Bringing up secondary CPUs ...
May 13 23:57:13.936837 kernel: smpboot: x86: Booting SMP configuration:
May 13 23:57:13.936845 kernel: .... node #0, CPUs: #1
May 13 23:57:13.936855 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 13 23:57:13.936865 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 13 23:57:13.936873 kernel: smp: Brought up 1 node, 2 CPUs
May 13 23:57:13.936885 kernel: smpboot: Max logical packages: 1
May 13 23:57:13.936893 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
May 13 23:57:13.936902 kernel: devtmpfs: initialized
May 13 23:57:13.936911 kernel: x86/mm: Memory block size: 128MB
May 13 23:57:13.936919 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
May 13 23:57:13.936928 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:57:13.936937 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 23:57:13.936946 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:57:13.936955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:57:13.936966 kernel: audit: initializing netlink subsys (disabled)
May 13 23:57:13.936975 kernel: audit: type=2000 audit(1747180634.617:1): state=initialized audit_enabled=0 res=1
May 13 23:57:13.936984 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:57:13.936992 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 23:57:13.937001 kernel: cpuidle: using governor menu
May 13 23:57:13.937010 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:57:13.937019 kernel: dca service started, version 1.12.1
May 13 23:57:13.937028 kernel: PCI: Using configuration type 1 for base access
May 13 23:57:13.937036 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 23:57:13.937048 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:57:13.937071 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:57:13.937080 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:57:13.937089 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:57:13.937098 kernel: ACPI: Added _OSI(Module Device)
May 13 23:57:13.937107 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:57:13.937116 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:57:13.937124 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:57:13.937133 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 13 23:57:13.937145 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 23:57:13.937158 kernel: ACPI: Interpreter enabled
May 13 23:57:13.937167 kernel: ACPI: PM: (supports S0 S5)
May 13 23:57:13.937175 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 23:57:13.937185 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 23:57:13.937193 kernel: PCI: Using E820 reservations for host bridge windows
May 13 23:57:13.937202 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 13 23:57:13.937211 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 23:57:13.937735 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:57:13.938768 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 13 23:57:13.938869 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 13 23:57:13.938882 kernel: acpiphp: Slot [3] registered
May 13 23:57:13.938891 kernel: acpiphp: Slot [4] registered
May 13 23:57:13.938900 kernel: acpiphp: Slot [5] registered
May 13 23:57:13.938909 kernel: acpiphp: Slot [6] registered
May 13 23:57:13.938918 kernel: acpiphp: Slot [7] registered
May 13 23:57:13.938931 kernel: acpiphp: Slot [8] registered
May 13 23:57:13.938940 kernel: acpiphp: Slot [9] registered
May 13 23:57:13.938949 kernel: acpiphp: Slot [10] registered
May 13 23:57:13.938958 kernel: acpiphp: Slot [11] registered
May 13 23:57:13.938967 kernel: acpiphp: Slot [12] registered
May 13 23:57:13.938976 kernel: acpiphp: Slot [13] registered
May 13 23:57:13.938985 kernel: acpiphp: Slot [14] registered
May 13 23:57:13.938993 kernel: acpiphp: Slot [15] registered
May 13 23:57:13.939002 kernel: acpiphp: Slot [16] registered
May 13 23:57:13.939011 kernel: acpiphp: Slot [17] registered
May 13 23:57:13.939022 kernel: acpiphp: Slot [18] registered
May 13 23:57:13.939030 kernel: acpiphp: Slot [19] registered
May 13 23:57:13.939039 kernel: acpiphp: Slot [20] registered
May 13 23:57:13.939048 kernel: acpiphp: Slot [21] registered
May 13 23:57:13.939057 kernel: acpiphp: Slot [22] registered
May 13 23:57:13.939066 kernel: acpiphp: Slot [23] registered
May 13 23:57:13.939074 kernel: acpiphp: Slot [24] registered
May 13 23:57:13.939083 kernel: acpiphp: Slot [25] registered
May 13 23:57:13.939091 kernel: acpiphp: Slot [26] registered
May 13 23:57:13.939103 kernel: acpiphp: Slot [27] registered
May 13 23:57:13.939111 kernel: acpiphp: Slot [28] registered
May 13 23:57:13.939120 kernel: acpiphp: Slot [29] registered
May 13 23:57:13.939128 kernel: acpiphp: Slot [30] registered
May 13 23:57:13.939137 kernel: acpiphp: Slot [31] registered
May 13 23:57:13.939146 kernel: PCI host bridge to bus 0000:00
May 13 23:57:13.939244 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 23:57:13.939330 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 23:57:13.939465 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 23:57:13.939549 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 13 23:57:13.939631 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
May 13 23:57:13.939713 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:57:13.939824 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 13 23:57:13.939927 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 13 23:57:13.940027 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
May 13 23:57:13.940126 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 13 23:57:13.940218 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
May 13 23:57:13.940310 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
May 13 23:57:13.940463 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
May 13 23:57:13.940556 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
May 13 23:57:13.940647 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
May 13 23:57:13.940739 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
May 13 23:57:13.940846 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
May 13 23:57:13.940940 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
May 13 23:57:13.941032 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 13 23:57:13.941131 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
May 13 23:57:13.941223 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 23:57:13.941334 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 13 23:57:13.942541 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
May 13 23:57:13.942647 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 13 23:57:13.942740 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
May 13 23:57:13.942753 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 23:57:13.942763 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 23:57:13.942772 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 23:57:13.942781 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 23:57:13.942790 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 13 23:57:13.942804 kernel: iommu: Default domain type: Translated
May 13 23:57:13.942813 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 23:57:13.942822 kernel: efivars: Registered efivars operations
May 13 23:57:13.942831 kernel: PCI: Using ACPI for IRQ routing
May 13 23:57:13.942840 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 23:57:13.942849 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
May 13 23:57:13.942858 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
May 13 23:57:13.942866 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
May 13 23:57:13.942959 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
May 13 23:57:13.943052 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
May 13 23:57:13.943146 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 23:57:13.943158 kernel: vgaarb: loaded
May 13 23:57:13.943167 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
May 13 23:57:13.943176 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
May 13 23:57:13.943186 kernel: clocksource: Switched to clocksource kvm-clock
May 13 23:57:13.943194 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:57:13.943204 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:57:13.943212 kernel: pnp: PnP ACPI init
May 13 23:57:13.943224 kernel: pnp: PnP ACPI: found 5 devices
May 13 23:57:13.943233 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 23:57:13.943243 kernel: NET: Registered PF_INET protocol family
May 13 23:57:13.943252 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:57:13.943260 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 13 23:57:13.943270 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:57:13.943279 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 13 23:57:13.943287 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 13 23:57:13.943296 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 13 23:57:13.943308 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 13 23:57:13.943317 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 13 23:57:13.943325 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:57:13.943334 kernel: NET: Registered PF_XDP protocol family
May 13 23:57:13.945467 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 23:57:13.945565 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 23:57:13.945647 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 23:57:13.945729 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 13 23:57:13.945809 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
May 13 23:57:13.945919 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 13 23:57:13.945932 kernel: PCI: CLS 0 bytes, default 64
May 13 23:57:13.945941 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 13 23:57:13.945951 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
May 13 23:57:13.945960 kernel: clocksource: Switched to clocksource tsc
May 13 23:57:13.945969 kernel: Initialise system trusted keyrings
May 13 23:57:13.945978 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 13 23:57:13.945987 kernel: Key type asymmetric registered
May 13 23:57:13.945999 kernel: Asymmetric key parser 'x509' registered
May 13 23:57:13.946008 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 23:57:13.946018 kernel: io scheduler mq-deadline registered
May 13 23:57:13.946026 kernel: io scheduler kyber registered
May 13 23:57:13.946035 kernel: io scheduler bfq registered
May 13 23:57:13.946044 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 23:57:13.946054 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:57:13.946063 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 23:57:13.946072 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 23:57:13.946083 kernel: i8042: Warning: Keylock active
May 13 23:57:13.946092 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 23:57:13.946101 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 23:57:13.946208 kernel: rtc_cmos 00:00: RTC can wake from S4
May 13 23:57:13.946297 kernel: rtc_cmos 00:00: registered as rtc0
May 13 23:57:13.946422 kernel: rtc_cmos 00:00: setting system clock to 2025-05-13T23:57:13 UTC (1747180633)
May 13 23:57:13.946508 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 13 23:57:13.946519 kernel: intel_pstate: CPU model not supported
May 13 23:57:13.946532 kernel: efifb: probing for efifb
May 13 23:57:13.946541 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
May 13 23:57:13.946550 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
May 13 23:57:13.946577 kernel: efifb: scrolling: redraw
May 13 23:57:13.946589 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 13 23:57:13.946598 kernel: Console: switching to colour frame buffer device 100x37
May 13 23:57:13.946608 kernel: fb0: EFI VGA frame buffer device
May 13 23:57:13.946617 kernel: pstore: Using crash dump compression: deflate
May 13 23:57:13.946627 kernel: pstore: Registered efi_pstore as persistent store backend
May 13 23:57:13.946638 kernel: NET: Registered PF_INET6 protocol family
May 13 23:57:13.946648 kernel: Segment Routing with IPv6
May 13 23:57:13.946657 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:57:13.946666 kernel: NET: Registered PF_PACKET protocol family
May 13 23:57:13.946676 kernel: Key type dns_resolver registered
May 13 23:57:13.946685 kernel: IPI shorthand broadcast: enabled
May 13 23:57:13.946695 kernel: sched_clock: Marking stable (472002012, 138263329)->(677214271, -66948930)
May 13 23:57:13.946704 kernel: registered taskstats version 1
May 13 23:57:13.946714 kernel: Loading compiled-in X.509 certificates
May 13 23:57:13.946726 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94'
May 13 23:57:13.946735 kernel: Key type .fscrypt registered
May 13 23:57:13.946744 kernel: Key type fscrypt-provisioning registered
May 13 23:57:13.946753 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:57:13.946763 kernel: ima: Allocated hash algorithm: sha1
May 13 23:57:13.946772 kernel: ima: No architecture policies found
May 13 23:57:13.946781 kernel: clk: Disabling unused clocks
May 13 23:57:13.946791 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 13 23:57:13.946800 kernel: Write protecting the kernel read-only data: 40960k
May 13 23:57:13.946812 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 13 23:57:13.946821 kernel: Run /init as init process
May 13 23:57:13.946831 kernel: with arguments:
May 13 23:57:13.946840 kernel: /init
May 13 23:57:13.946849 kernel: with environment:
May 13 23:57:13.946859 kernel: HOME=/
May 13 23:57:13.946868 kernel: TERM=linux
May 13 23:57:13.946877 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:57:13.946888 systemd[1]: Successfully made /usr/ read-only.
May 13 23:57:13.946904 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:57:13.946914 systemd[1]: Detected virtualization amazon.
May 13 23:57:13.946924 systemd[1]: Detected architecture x86-64.
May 13 23:57:13.946933 systemd[1]: Running in initrd.
May 13 23:57:13.946948 systemd[1]: No hostname configured, using default hostname.
May 13 23:57:13.946958 systemd[1]: Hostname set to .
May 13 23:57:13.946968 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:57:13.946978 systemd[1]: Queued start job for default target initrd.target.
May 13 23:57:13.946987 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:57:13.946997 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:57:13.947008 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:57:13.947018 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:57:13.947031 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:57:13.947044 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:57:13.947055 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:57:13.947065 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:57:13.947075 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:57:13.947085 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:57:13.947097 systemd[1]: Reached target paths.target - Path Units.
May 13 23:57:13.947107 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:57:13.947118 systemd[1]: Reached target swap.target - Swaps.
May 13 23:57:13.947127 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:57:13.947137 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:57:13.947147 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:57:13.947157 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:57:13.947167 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:57:13.947177 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:57:13.947189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:57:13.947199 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:57:13.947209 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:57:13.947220 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:57:13.947229 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:57:13.947240 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:57:13.947249 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:57:13.947259 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:57:13.947269 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:57:13.947282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:13.947292 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:57:13.947324 systemd-journald[179]: Collecting audit messages is disabled.
May 13 23:57:13.947366 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:57:13.947381 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:57:13.947391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:13.947402 systemd-journald[179]: Journal started
May 13 23:57:13.947428 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2154f1f4d00b93890218301ce5251a) is 4.7M, max 38.1M, 33.3M free.
May 13 23:57:13.934543 systemd-modules-load[180]: Inserted module 'overlay'
May 13 23:57:13.951403 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:57:13.954475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:57:13.960014 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:57:13.963846 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:57:13.964385 kernel: Bridge firewalling registered
May 13 23:57:13.963963 systemd-modules-load[180]: Inserted module 'br_netfilter'
May 13 23:57:13.968753 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:57:13.970208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:57:13.973219 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:57:13.976268 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:57:13.983836 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:57:13.988766 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:13.991487 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:57:13.995200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:57:13.997869 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:57:14.006617 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 23:57:14.005985 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:57:14.007126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:57:14.013379 dracut-cmdline[209]: dracut-dracut-053
May 13 23:57:14.019358 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:57:14.063904 systemd-resolved[218]: Positive Trust Anchors:
May 13 23:57:14.064494 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:57:14.064562 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:57:14.074453 systemd-resolved[218]: Defaulting to hostname 'linux'.
May 13 23:57:14.076954 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:57:14.077710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:57:14.109391 kernel: SCSI subsystem initialized
May 13 23:57:14.119383 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:57:14.130392 kernel: iscsi: registered transport (tcp)
May 13 23:57:14.152868 kernel: iscsi: registered transport (qla4xxx)
May 13 23:57:14.152954 kernel: QLogic iSCSI HBA Driver
May 13 23:57:14.191246 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:57:14.193249 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:57:14.228947 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
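The dracut-cmdline entry above echoes the kernel command line as space-separated `key=value` tokens. A minimal sketch of parsing such a line in shell (the `cmdline_get` helper and the shortened sample string are illustrative, not part of the boot log; on a live system the line comes from /proc/cmdline):

```shell
# Hypothetical helper: look up one key=value parameter in a kernel
# command line string of the form shown in the dracut-cmdline entry.
cmdline_get() {
  key=$1; shift
  for tok in $*; do
    case "$tok" in
      "$key"=*) printf '%s\n' "${tok#"$key"=}"; return 0 ;;
    esac
  done
  return 1
}

# Shortened sample taken from the log; on a real host: cmdline=$(cat /proc/cmdline)
cmdline='root=LABEL=ROOT console=ttyS0,115200n8 flatcar.oem.id=ec2 net.ifnames=0'
cmdline_get flatcar.oem.id $cmdline   # prints: ec2
```

Note that a parameter may repeat (the log's line has `console=` and `rootflags=` twice); this sketch returns the first match, whereas the kernel generally honours the last occurrence.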
May 13 23:57:14.229027 kernel: device-mapper: uevent: version 1.0.3
May 13 23:57:14.229049 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:57:14.272398 kernel: raid6: avx512x4 gen() 15493 MB/s
May 13 23:57:14.290374 kernel: raid6: avx512x2 gen() 15394 MB/s
May 13 23:57:14.308383 kernel: raid6: avx512x1 gen() 15347 MB/s
May 13 23:57:14.326374 kernel: raid6: avx2x4 gen() 15235 MB/s
May 13 23:57:14.344375 kernel: raid6: avx2x2 gen() 15344 MB/s
May 13 23:57:14.362508 kernel: raid6: avx2x1 gen() 11715 MB/s
May 13 23:57:14.362572 kernel: raid6: using algorithm avx512x4 gen() 15493 MB/s
May 13 23:57:14.381551 kernel: raid6: .... xor() 7529 MB/s, rmw enabled
May 13 23:57:14.381622 kernel: raid6: using avx512x2 recovery algorithm
May 13 23:57:14.403387 kernel: xor: automatically using best checksumming function avx
May 13 23:57:14.557385 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:57:14.567900 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:57:14.570000 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:57:14.596159 systemd-udevd[399]: Using default interface naming scheme 'v255'.
May 13 23:57:14.602280 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:57:14.606271 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:57:14.631865 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
May 13 23:57:14.661594 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:57:14.663400 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:57:14.724097 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:57:14.728520 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:57:14.767646 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:57:14.769277 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:57:14.771579 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:57:14.772079 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:57:14.777008 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:57:14.814156 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:57:14.819370 kernel: cryptd: max_cpu_qlen set to 1000
May 13 23:57:14.835580 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 23:57:14.835651 kernel: AES CTR mode by8 optimization enabled
May 13 23:57:14.857266 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 13 23:57:14.857586 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 13 23:57:14.865374 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
May 13 23:57:14.874706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:57:14.875030 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:14.878569 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:57:14.879608 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:57:14.879919 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:14.882415 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:14.885776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:14.892168 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:5d:36:4d:7a:c5
May 13 23:57:14.898120 kernel: nvme nvme0: pci function 0000:00:04.0
May 13 23:57:14.898420 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 13 23:57:14.894833 (udev-worker)[449]: Network interface NamePolicy= disabled on kernel command line.
May 13 23:57:14.920385 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 13 23:57:14.927683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:14.936516 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:57:14.936540 kernel: GPT:9289727 != 16777215
May 13 23:57:14.936553 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:57:14.936564 kernel: GPT:9289727 != 16777215
May 13 23:57:14.936575 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:57:14.936593 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 13 23:57:14.937007 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:57:14.965812 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:15.007251 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (452)
May 13 23:57:15.019091 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 13 23:57:15.031371 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (450)
May 13 23:57:15.042684 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 13 23:57:15.044277 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
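The `GPT:9289727 != 16777215` pair above compares the backup-header LBA recorded in the primary GPT against the disk's actual last LBA, a mismatch typical of an EBS volume grown after the image was written (16777216 sectors of 512 bytes is 8 GiB). A sketch of the same consistency check, with the usual repair left as a comment (the numbers come from the log; `sgdisk --move-second-header` and the device name are assumptions for illustration):

```shell
# Values reported by the kernel GPT warning above.
recorded_alt_lba=9289727    # where the primary header says the backup header lives
actual_last_lba=16777215    # the device's real last LBA (8 GiB at 512-byte sectors)

if [ "$recorded_alt_lba" -ne "$actual_last_lba" ]; then
  echo "backup GPT header not at end of disk ($recorded_alt_lba != $actual_last_lba)"
  # On a live system the backup header is typically relocated with, e.g.:
  #   sgdisk --move-second-header /dev/nvme0n1
fi
```

Flatcar's disk-uuid.service performs this repair itself during first boot, which is why the warning is harmless here.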
May 13 23:57:15.046067 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:57:15.065634 disk-uuid[624]: Primary Header is updated.
May 13 23:57:15.065634 disk-uuid[624]: Secondary Entries is updated.
May 13 23:57:15.065634 disk-uuid[624]: Secondary Header is updated.
May 13 23:57:15.083374 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 13 23:57:15.112860 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 13 23:57:16.079280 disk-uuid[631]: The operation has completed successfully.
May 13 23:57:16.080000 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 13 23:57:16.185716 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:57:16.185823 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:57:16.232658 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:57:16.245790 sh[892]: Success
May 13 23:57:16.266368 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 13 23:57:16.379564 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:57:16.385459 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:57:16.395955 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:57:16.425537 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8
May 13 23:57:16.425597 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:16.428653 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:57:16.428715 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:57:16.429948 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:57:16.456375 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 13 23:57:16.468373 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:57:16.470044 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:57:16.471026 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:57:16.473489 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:57:16.517598 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:16.517676 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:16.519465 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 13 23:57:16.527380 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 13 23:57:16.534451 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:16.537103 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:57:16.541700 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:57:16.575212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:57:16.577761 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:57:16.628403 systemd-networkd[1081]: lo: Link UP
May 13 23:57:16.628414 systemd-networkd[1081]: lo: Gained carrier
May 13 23:57:16.630709 systemd-networkd[1081]: Enumeration completed
May 13 23:57:16.631126 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:16.631132 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:57:16.632186 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:57:16.633633 systemd[1]: Reached target network.target - Network.
May 13 23:57:16.634571 systemd-networkd[1081]: eth0: Link UP
May 13 23:57:16.634576 systemd-networkd[1081]: eth0: Gained carrier
May 13 23:57:16.634589 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:16.646456 systemd-networkd[1081]: eth0: DHCPv4 address 172.31.16.70/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 13 23:57:16.715267 ignition[1036]: Ignition 2.20.0
May 13 23:57:16.715344 ignition[1036]: Stage: fetch-offline
May 13 23:57:16.715593 ignition[1036]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:16.715607 ignition[1036]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:57:16.716060 ignition[1036]: Ignition finished successfully
May 13 23:57:16.718423 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:57:16.720269 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 23:57:16.743231 ignition[1093]: Ignition 2.20.0
May 13 23:57:16.743245 ignition[1093]: Stage: fetch
May 13 23:57:16.743679 ignition[1093]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:16.743692 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:57:16.743824 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:57:16.751870 ignition[1093]: PUT result: OK
May 13 23:57:16.754193 ignition[1093]: parsed url from cmdline: ""
May 13 23:57:16.754202 ignition[1093]: no config URL provided
May 13 23:57:16.754211 ignition[1093]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:57:16.754224 ignition[1093]: no config at "/usr/lib/ignition/user.ign"
May 13 23:57:16.754241 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:57:16.755058 ignition[1093]: PUT result: OK
May 13 23:57:16.755114 ignition[1093]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 13 23:57:16.755913 ignition[1093]: GET result: OK
May 13 23:57:16.755975 ignition[1093]: parsing config with SHA512: 745f20bf2f02a2a32c6b3ccc5d385b6995c4d7c3e48e819c1ac2bf8a1f0aa4aeb0be08d10da9fcc7d7cd73409d2ec5d827996542e3a15292ff00534a78fd4a11
May 13 23:57:16.760265 unknown[1093]: fetched base config from "system"
May 13 23:57:16.760400 unknown[1093]: fetched base config from "system"
May 13 23:57:16.760772 ignition[1093]: fetch: fetch complete
May 13 23:57:16.760407 unknown[1093]: fetched user config from "aws"
May 13 23:57:16.760777 ignition[1093]: fetch: fetch passed
May 13 23:57:16.760825 ignition[1093]: Ignition finished successfully
May 13 23:57:16.762364 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 23:57:16.764139 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
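The PUT/GET pair in the fetch stage above is the EC2 IMDSv2 exchange: a PUT to the token endpoint first, then a GET for user data with the token in a header. A sketch of the same exchange with curl (the endpoint paths and header names are the documented IMDSv2 interface; this prints the requests rather than issuing them, since 169.254.169.254 is only reachable from inside an instance):

```shell
imds=http://169.254.169.254

# Step 1: obtain a session token (Ignition's "PUT .../latest/api/token" above).
token_req="curl -sf -X PUT $imds/latest/api/token -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600'"

# Step 2: fetch user data with the token (Ignition's "GET .../user-data" above).
data_req="curl -sf $imds/2019-10-01/user-data -H 'X-aws-ec2-metadata-token: \$TOKEN'"

printf '%s\n%s\n' "$token_req" "$data_req"
```

The unauthenticated IMDSv1 path would be a plain GET without the token; Ignition uses the token flow so it keeps working when the instance enforces IMDSv2.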
May 13 23:57:16.791142 ignition[1099]: Ignition 2.20.0
May 13 23:57:16.791152 ignition[1099]: Stage: kargs
May 13 23:57:16.791480 ignition[1099]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:16.791489 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:57:16.791577 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:57:16.795271 ignition[1099]: PUT result: OK
May 13 23:57:16.798140 ignition[1099]: kargs: kargs passed
May 13 23:57:16.798200 ignition[1099]: Ignition finished successfully
May 13 23:57:16.799208 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:57:16.800924 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:57:16.828930 ignition[1105]: Ignition 2.20.0
May 13 23:57:16.829068 ignition[1105]: Stage: disks
May 13 23:57:16.829544 ignition[1105]: no configs at "/usr/lib/ignition/base.d"
May 13 23:57:16.829555 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:57:16.829639 ignition[1105]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:57:16.830528 ignition[1105]: PUT result: OK
May 13 23:57:16.833190 ignition[1105]: disks: disks passed
May 13 23:57:16.833308 ignition[1105]: Ignition finished successfully
May 13 23:57:16.834398 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:57:16.835211 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:57:16.835614 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:57:16.836122 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:57:16.836696 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:57:16.837207 systemd[1]: Reached target basic.target - Basic System.
May 13 23:57:16.838697 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:57:16.888828 systemd-fsck[1113]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:57:16.891249 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:57:16.893090 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:57:17.015719 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none.
May 13 23:57:17.016597 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:57:17.017698 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:57:17.027149 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:57:17.030434 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:57:17.031559 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 23:57:17.031606 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:57:17.031630 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:57:17.044990 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:57:17.047085 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:57:17.061378 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1132)
May 13 23:57:17.065460 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:17.065521 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:17.065535 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 13 23:57:17.102456 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 13 23:57:17.104816 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:57:17.127126 initrd-setup-root[1156]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:57:17.131724 initrd-setup-root[1163]: cut: /sysroot/etc/group: No such file or directory
May 13 23:57:17.136049 initrd-setup-root[1170]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:57:17.141595 initrd-setup-root[1177]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:57:17.247587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:57:17.249464 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:57:17.251492 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:57:17.265407 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:17.292238 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:57:17.297101 ignition[1244]: INFO : Ignition 2.20.0
May 13 23:57:17.297101 ignition[1244]: INFO : Stage: mount
May 13 23:57:17.298811 ignition[1244]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:57:17.298811 ignition[1244]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:57:17.298811 ignition[1244]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:57:17.298811 ignition[1244]: INFO : PUT result: OK
May 13 23:57:17.301438 ignition[1244]: INFO : mount: mount passed
May 13 23:57:17.301973 ignition[1244]: INFO : Ignition finished successfully
May 13 23:57:17.302759 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:57:17.304628 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:57:17.422418 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:57:17.423951 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:57:17.451374 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1257)
May 13 23:57:17.455145 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:57:17.455214 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:57:17.455236 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 13 23:57:17.461380 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 13 23:57:17.463061 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:57:17.491496 ignition[1273]: INFO : Ignition 2.20.0
May 13 23:57:17.491496 ignition[1273]: INFO : Stage: files
May 13 23:57:17.492938 ignition[1273]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:57:17.492938 ignition[1273]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 13 23:57:17.492938 ignition[1273]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 13 23:57:17.492938 ignition[1273]: INFO : PUT result: OK
May 13 23:57:17.498685 ignition[1273]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:57:17.499990 ignition[1273]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:57:17.499990 ignition[1273]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:57:17.504382 ignition[1273]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:57:17.505070 ignition[1273]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:57:17.505070 ignition[1273]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:57:17.504825 unknown[1273]: wrote ssh authorized keys file for user: core
May 13 23:57:17.506999 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 23:57:17.507583 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 23:57:17.662089 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:57:17.810311 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 23:57:17.810311 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:57:17.811999 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 23:57:17.914641 systemd-networkd[1081]: eth0: Gained IPv6LL
May 13 23:57:18.357486 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 23:57:18.576276 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:57:18.578054 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 23:57:19.053085 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 23:57:20.017329 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:57:20.017329 ignition[1273]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 23:57:20.019271 ignition[1273]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:57:20.020176 ignition[1273]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:57:20.020176 ignition[1273]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 23:57:20.020176 ignition[1273]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:57:20.020176 ignition[1273]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:57:20.020176 ignition[1273]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:57:20.020176 ignition[1273]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:57:20.020176 ignition[1273]: INFO : files: files passed
May 13 23:57:20.020176 ignition[1273]: INFO : Ignition finished successfully
May 13 23:57:20.020971 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:57:20.024572 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:57:20.027946 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:57:20.034556 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:57:20.035880 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:57:20.040699 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:57:20.040699 initrd-setup-root-after-ignition[1304]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:57:20.042738 initrd-setup-root-after-ignition[1308]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:57:20.044150 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:57:20.044999 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:57:20.046486 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:57:20.087867 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 23:57:20.087991 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:57:20.089129 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:57:20.090207 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:57:20.091036 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:57:20.092142 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:57:20.109658 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:57:20.111639 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:57:20.132302 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:57:20.132928 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:57:20.133901 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:57:20.134658 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 13 23:57:20.134782 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:57:20.135737 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:57:20.136523 systemd[1]: Stopped target basic.target - Basic System. May 13 23:57:20.137219 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:57:20.138031 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:57:20.138719 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:57:20.139403 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:57:20.140085 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:57:20.140767 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:57:20.141721 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:57:20.142371 systemd[1]: Stopped target swap.target - Swaps. May 13 23:57:20.143001 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:57:20.143120 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:57:20.144022 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:57:20.144738 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:57:20.145402 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:57:20.146103 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:57:20.146552 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:57:20.146662 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:57:20.147880 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
May 13 23:57:20.148026 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:57:20.148574 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:57:20.148674 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:57:20.150761 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:57:20.151114 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:57:20.151241 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:57:20.154518 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:57:20.154864 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:57:20.154987 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:57:20.155456 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:57:20.155549 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:57:20.163590 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:57:20.163681 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:57:20.176672 ignition[1328]: INFO : Ignition 2.20.0 May 13 23:57:20.177480 ignition[1328]: INFO : Stage: umount May 13 23:57:20.177928 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:57:20.177928 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 13 23:57:20.177928 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 13 23:57:20.179508 ignition[1328]: INFO : PUT result: OK May 13 23:57:20.180147 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 13 23:57:20.182687 ignition[1328]: INFO : umount: umount passed May 13 23:57:20.184044 ignition[1328]: INFO : Ignition finished successfully May 13 23:57:20.184318 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:57:20.184435 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:57:20.184905 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:57:20.184954 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:57:20.185887 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:57:20.185936 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:57:20.186487 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 23:57:20.186529 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 13 23:57:20.187054 systemd[1]: Stopped target network.target - Network. May 13 23:57:20.187657 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:57:20.187702 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:57:20.188259 systemd[1]: Stopped target paths.target - Path Units. May 13 23:57:20.188897 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:57:20.192424 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:57:20.192811 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:57:20.193805 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:57:20.194394 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:57:20.194440 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:57:20.194980 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:57:20.195018 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:57:20.195578 systemd[1]: ignition-setup.service: Deactivated successfully. 
May 13 23:57:20.195629 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:57:20.196188 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:57:20.196237 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:57:20.196938 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:57:20.197521 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:57:20.198679 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:57:20.198762 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:57:20.199785 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:57:20.199872 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:57:20.202798 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:57:20.202899 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:57:20.205648 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:57:20.205875 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:57:20.205970 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:57:20.207759 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:57:20.208591 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:57:20.208659 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:57:20.210104 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:57:20.210450 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:57:20.210497 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:57:20.210877 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
May 13 23:57:20.210916 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:57:20.211307 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:57:20.211343 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:57:20.211734 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:57:20.211771 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:57:20.212153 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:57:20.214498 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:57:20.214556 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:57:20.227691 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:57:20.227828 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:57:20.228892 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:57:20.228967 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:57:20.229700 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:57:20.229732 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:57:20.230469 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:57:20.230514 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:57:20.231867 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:57:20.231913 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:57:20.232697 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:57:20.232741 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 13 23:57:20.234460 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:57:20.235754 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:57:20.235810 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:57:20.237326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:57:20.237381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:20.239132 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 23:57:20.239188 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:57:20.239530 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:57:20.239611 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:57:20.249071 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:57:20.249171 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:57:20.250555 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:57:20.251912 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:57:20.267711 systemd[1]: Switching root. May 13 23:57:20.302098 systemd-journald[179]: Journal stopped May 13 23:57:21.524230 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). 
May 13 23:57:21.524294 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:57:21.524309 kernel: SELinux: policy capability open_perms=1 May 13 23:57:21.524321 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:57:21.524333 kernel: SELinux: policy capability always_check_network=0 May 13 23:57:21.524360 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:57:21.524376 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:57:21.524388 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:57:21.524399 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:57:21.524412 kernel: audit: type=1403 audit(1747180640.578:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:57:21.524427 systemd[1]: Successfully loaded SELinux policy in 41.318ms. May 13 23:57:21.524449 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.677ms. May 13 23:57:21.524467 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:57:21.524479 systemd[1]: Detected virtualization amazon. May 13 23:57:21.524492 systemd[1]: Detected architecture x86-64. May 13 23:57:21.524506 systemd[1]: Detected first boot. May 13 23:57:21.524519 systemd[1]: Initializing machine ID from VM UUID. May 13 23:57:21.524532 zram_generator::config[1374]: No configuration found. 
May 13 23:57:21.524545 kernel: Guest personality initialized and is inactive May 13 23:57:21.524557 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 23:57:21.524569 kernel: Initialized host personality May 13 23:57:21.524580 kernel: NET: Registered PF_VSOCK protocol family May 13 23:57:21.524596 systemd[1]: Populated /etc with preset unit settings. May 13 23:57:21.524609 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:57:21.524624 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:57:21.524636 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:57:21.524647 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:57:21.524659 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:57:21.524672 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:57:21.524684 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:57:21.524698 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:57:21.524715 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:57:21.524731 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:57:21.524743 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:57:21.524755 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:57:21.524768 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:57:21.524781 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:57:21.524794 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 13 23:57:21.524806 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:57:21.524819 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:57:21.524831 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:57:21.524846 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 23:57:21.524859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:57:21.524871 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:57:21.524884 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:57:21.524896 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:57:21.524913 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:57:21.524925 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:57:21.524941 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:57:21.524953 systemd[1]: Reached target slices.target - Slice Units. May 13 23:57:21.524965 systemd[1]: Reached target swap.target - Swaps. May 13 23:57:21.524980 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:57:21.524993 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:57:21.525006 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:57:21.525018 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:57:21.525030 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:57:21.525043 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 13 23:57:21.525055 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:57:21.525071 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:57:21.525083 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:57:21.525096 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:57:21.525109 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:21.525121 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:57:21.525133 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:57:21.525145 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:57:21.525159 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:57:21.525174 systemd[1]: Reached target machines.target - Containers. May 13 23:57:21.525187 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:57:21.525200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:57:21.525212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:57:21.525231 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:57:21.525243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:57:21.525256 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:57:21.525269 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:57:21.525281 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 13 23:57:21.525296 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:57:21.525308 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:57:21.525321 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:57:21.525333 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:57:21.526369 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:57:21.526389 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:57:21.526403 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:57:21.526416 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:57:21.526432 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:57:21.526445 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 23:57:21.526457 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:57:21.526469 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:57:21.526481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:57:21.526518 systemd-journald[1464]: Collecting audit messages is disabled. May 13 23:57:21.526543 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:57:21.526558 systemd[1]: Stopped verity-setup.service. May 13 23:57:21.526571 systemd-journald[1464]: Journal started May 13 23:57:21.526596 systemd-journald[1464]: Runtime Journal (/run/log/journal/ec2154f1f4d00b93890218301ce5251a) is 4.7M, max 38.1M, 33.3M free. 
May 13 23:57:21.291530 systemd[1]: Queued start job for default target multi-user.target. May 13 23:57:21.303671 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 13 23:57:21.306173 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:57:21.531378 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:21.538856 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:57:21.541925 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:57:21.544529 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:57:21.545105 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:57:21.545643 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:57:21.546450 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:57:21.547462 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:57:21.548659 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:57:21.550656 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:57:21.550838 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:57:21.551489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:57:21.551639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:57:21.552639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:57:21.552783 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:57:21.575374 kernel: ACPI: bus type drm_connector registered May 13 23:57:21.570282 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
May 13 23:57:21.571703 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:57:21.572106 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:57:21.577717 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:57:21.583280 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:57:21.586564 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:57:21.586614 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:57:21.589680 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:57:21.593381 kernel: fuse: init (API version 7.39) May 13 23:57:21.594584 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:57:21.596554 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:57:21.597386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:57:21.606383 kernel: loop: module loaded May 13 23:57:21.609661 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:57:21.611917 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:57:21.612380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:57:21.614474 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:57:21.617138 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:57:21.620144 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 13 23:57:21.620498 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:57:21.621191 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:57:21.622405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:57:21.623136 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:57:21.624920 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:57:21.626296 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:57:21.634125 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:57:21.640731 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:57:21.642307 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:57:21.653029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:57:21.655948 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:57:21.657495 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:57:21.660812 systemd-journald[1464]: Time spent on flushing to /var/log/journal/ec2154f1f4d00b93890218301ce5251a is 31.729ms for 1005 entries. May 13 23:57:21.660812 systemd-journald[1464]: System Journal (/var/log/journal/ec2154f1f4d00b93890218301ce5251a) is 8M, max 195.6M, 187.6M free. May 13 23:57:21.701487 systemd-journald[1464]: Received client request to flush runtime journal. May 13 23:57:21.688672 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:57:21.691939 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:57:21.698563 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
May 13 23:57:21.699110 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:57:21.702216 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:57:21.703857 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:57:21.727510 kernel: loop0: detected capacity change from 0 to 64352 May 13 23:57:21.744626 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:57:21.751809 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:57:21.786907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:57:21.790484 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:57:21.802634 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:57:21.807480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:57:21.817610 udevadm[1524]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:57:21.834497 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:57:21.851580 systemd-tmpfiles[1526]: ACLs are not supported, ignoring. May 13 23:57:21.851599 systemd-tmpfiles[1526]: ACLs are not supported, ignoring. May 13 23:57:21.860367 kernel: loop1: detected capacity change from 0 to 151640 May 13 23:57:21.861564 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:57:21.950424 kernel: loop2: detected capacity change from 0 to 210664 May 13 23:57:22.240378 kernel: loop3: detected capacity change from 0 to 109808 May 13 23:57:22.308582 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
May 13 23:57:22.312426 kernel: loop4: detected capacity change from 0 to 64352 May 13 23:57:22.327380 kernel: loop5: detected capacity change from 0 to 151640 May 13 23:57:22.370405 kernel: loop6: detected capacity change from 0 to 210664 May 13 23:57:22.407383 kernel: loop7: detected capacity change from 0 to 109808 May 13 23:57:22.414533 ldconfig[1490]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:57:22.419247 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:57:22.429087 (sd-merge)[1536]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 13 23:57:22.429961 (sd-merge)[1536]: Merged extensions into '/usr'. May 13 23:57:22.437814 systemd[1]: Reload requested from client PID 1499 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:57:22.437991 systemd[1]: Reloading... May 13 23:57:22.564379 zram_generator::config[1570]: No configuration found. May 13 23:57:22.690405 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:57:22.764312 systemd[1]: Reloading finished in 325 ms. May 13 23:57:22.784119 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:57:22.784985 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:57:22.796811 systemd[1]: Starting ensure-sysext.service... May 13 23:57:22.800563 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:57:22.803723 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:57:22.840803 systemd[1]: Reload requested from client PID 1616 ('systemctl') (unit ensure-sysext.service)... May 13 23:57:22.840828 systemd[1]: Reloading... 
May 13 23:57:22.846446 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:57:22.846853 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:57:22.849953 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:57:22.850435 systemd-tmpfiles[1617]: ACLs are not supported, ignoring.
May 13 23:57:22.850525 systemd-tmpfiles[1617]: ACLs are not supported, ignoring.
May 13 23:57:22.862545 systemd-tmpfiles[1617]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:57:22.862568 systemd-tmpfiles[1617]: Skipping /boot
May 13 23:57:22.869196 systemd-udevd[1618]: Using default interface naming scheme 'v255'.
May 13 23:57:22.886323 systemd-tmpfiles[1617]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:57:22.886358 systemd-tmpfiles[1617]: Skipping /boot
May 13 23:57:22.981375 zram_generator::config[1654]: No configuration found.
May 13 23:57:23.084409 (udev-worker)[1655]: Network interface NamePolicy= disabled on kernel command line.
May 13 23:57:23.166691 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1675)
May 13 23:57:23.275891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:57:23.304701 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 13 23:57:23.317424 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
May 13 23:57:23.323739 kernel: ACPI: button: Power Button [PWRF]
May 13 23:57:23.327374 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
May 13 23:57:23.341405 kernel: ACPI: button: Sleep Button [SLPF]
May 13 23:57:23.405374 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
May 13 23:57:23.458766 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 23:57:23.459790 systemd[1]: Reloading finished in 618 ms.
May 13 23:57:23.474070 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:57:23.475925 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:57:23.498371 kernel: mousedev: PS/2 mouse device common for all mice
May 13 23:57:23.529419 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:57:23.566018 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 13 23:57:23.566972 systemd[1]: Finished ensure-sysext.service.
May 13 23:57:23.572126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:23.573587 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:57:23.578511 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:57:23.579622 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:57:23.581597 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:57:23.584646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:57:23.591051 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:57:23.597890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:57:23.607378 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:57:23.608232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:57:23.612709 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:57:23.614071 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:57:23.617603 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:57:23.625212 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:57:23.636721 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:57:23.639005 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:57:23.643088 lvm[1812]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:57:23.646466 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:57:23.650583 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:23.651287 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:23.652325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:57:23.653696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:57:23.655907 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:57:23.656125 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:57:23.668280 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:57:23.686215 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:57:23.687902 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:57:23.689071 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:57:23.700972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:57:23.701254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:57:23.712308 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:57:23.727605 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:57:23.743441 augenrules[1851]: No rules
May 13 23:57:23.746804 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:57:23.747484 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:57:23.750191 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:57:23.751231 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:57:23.757015 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:57:23.762048 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 23:57:23.796922 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:57:23.803936 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:57:23.805525 lvm[1859]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:57:23.808564 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:57:23.817471 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:57:23.821104 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:57:23.848383 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 23:57:23.857533 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:57:23.878053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:23.938332 systemd-networkd[1825]: lo: Link UP
May 13 23:57:23.938757 systemd-networkd[1825]: lo: Gained carrier
May 13 23:57:23.940759 systemd-networkd[1825]: Enumeration completed
May 13 23:57:23.941026 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:57:23.942121 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:23.943336 systemd-networkd[1825]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:57:23.946189 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:57:23.947573 systemd-resolved[1826]: Positive Trust Anchors:
May 13 23:57:23.947596 systemd-resolved[1826]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:57:23.947663 systemd-resolved[1826]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:57:23.948172 systemd-networkd[1825]: eth0: Link UP
May 13 23:57:23.949289 systemd-networkd[1825]: eth0: Gained carrier
May 13 23:57:23.949321 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:23.950051 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:57:23.958088 systemd-networkd[1825]: eth0: DHCPv4 address 172.31.16.70/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 13 23:57:23.960270 systemd-resolved[1826]: Defaulting to hostname 'linux'.
May 13 23:57:23.962619 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:57:23.963522 systemd[1]: Reached target network.target - Network.
May 13 23:57:23.964112 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:57:23.964706 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:57:23.965437 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 23:57:23.965964 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 23:57:23.966627 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 23:57:23.967304 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 23:57:23.967852 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 23:57:23.968414 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 23:57:23.968454 systemd[1]: Reached target paths.target - Path Units.
May 13 23:57:23.968956 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:57:23.972462 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 23:57:23.975317 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 23:57:23.979783 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 23:57:23.980648 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 23:57:23.981297 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 23:57:23.991221 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 23:57:23.992299 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 23:57:23.993941 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 23:57:23.994599 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 23:57:23.995728 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:57:23.996277 systemd[1]: Reached target basic.target - Basic System.
May 13 23:57:23.996781 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 23:57:23.996826 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 23:57:23.998125 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 23:57:24.002538 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 13 23:57:24.006521 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 23:57:24.015608 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 23:57:24.019544 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 23:57:24.020148 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 23:57:24.023820 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 23:57:24.027734 systemd[1]: Started ntpd.service - Network Time Service.
May 13 23:57:24.030661 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 23:57:24.039261 systemd[1]: Starting setup-oem.service - Setup OEM...
May 13 23:57:24.053821 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 23:57:24.058682 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 23:57:24.075736 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 23:57:24.079313 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 23:57:24.080085 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 23:57:24.089099 systemd[1]: Starting update-engine.service - Update Engine...
May 13 23:57:24.106096 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 23:57:24.112311 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 23:57:24.113692 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 23:57:24.157818 jq[1895]: true
May 13 23:57:24.177430 jq[1884]: false
May 13 23:57:24.177620 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: ntpd 4.2.8p17@1.4004-o Tue May 13 21:33:08 UTC 2025 (1): Starting
May 13 23:57:24.177620 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 13 23:57:24.177620 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: ----------------------------------------------------
May 13 23:57:24.177620 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: ntp-4 is maintained by Network Time Foundation,
May 13 23:57:24.177620 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 13 23:57:24.177620 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: corporation. Support and training for ntp-4 are
May 13 23:57:24.177620 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: available at https://www.nwtime.org/support
May 13 23:57:24.177620 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: ----------------------------------------------------
May 13 23:57:24.168756 (ntainerd)[1904]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 23:57:24.173809 ntpd[1887]: ntpd 4.2.8p17@1.4004-o Tue May 13 21:33:08 UTC 2025 (1): Starting
May 13 23:57:24.174770 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 23:57:24.173835 ntpd[1887]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 13 23:57:24.175037 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 23:57:24.173845 ntpd[1887]: ----------------------------------------------------
May 13 23:57:24.173855 ntpd[1887]: ntp-4 is maintained by Network Time Foundation,
May 13 23:57:24.173865 ntpd[1887]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 13 23:57:24.173874 ntpd[1887]: corporation. Support and training for ntp-4 are
May 13 23:57:24.173884 ntpd[1887]: available at https://www.nwtime.org/support
May 13 23:57:24.173893 ntpd[1887]: ----------------------------------------------------
May 13 23:57:24.199923 ntpd[1887]: proto: precision = 0.096 usec (-23)
May 13 23:57:24.214500 extend-filesystems[1885]: Found loop4
May 13 23:57:24.214500 extend-filesystems[1885]: Found loop5
May 13 23:57:24.214500 extend-filesystems[1885]: Found loop6
May 13 23:57:24.214500 extend-filesystems[1885]: Found loop7
May 13 23:57:24.214500 extend-filesystems[1885]: Found nvme0n1
May 13 23:57:24.214500 extend-filesystems[1885]: Found nvme0n1p1
May 13 23:57:24.214500 extend-filesystems[1885]: Found nvme0n1p2
May 13 23:57:24.214500 extend-filesystems[1885]: Found nvme0n1p3
May 13 23:57:24.214500 extend-filesystems[1885]: Found usr
May 13 23:57:24.214500 extend-filesystems[1885]: Found nvme0n1p4
May 13 23:57:24.214500 extend-filesystems[1885]: Found nvme0n1p6
May 13 23:57:24.214500 extend-filesystems[1885]: Found nvme0n1p7
May 13 23:57:24.214500 extend-filesystems[1885]: Found nvme0n1p9
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: proto: precision = 0.096 usec (-23)
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: basedate set to 2025-05-01
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: gps base set to 2025-05-04 (week 2365)
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: Listen and drop on 0 v6wildcard [::]:123
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: Listen normally on 2 lo 127.0.0.1:123
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: Listen normally on 3 eth0 172.31.16.70:123
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: Listen normally on 4 lo [::1]:123
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: bind(21) AF_INET6 fe80::45d:36ff:fe4d:7ac5%2#123 flags 0x11 failed: Cannot assign requested address
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: unable to create socket on eth0 (5) for fe80::45d:36ff:fe4d:7ac5%2#123
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: failed to init interface for address fe80::45d:36ff:fe4d:7ac5%2
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: Listening on routing socket on fd #21 for interface updates
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 13 23:57:24.289981 ntpd[1887]: 13 May 23:57:24 ntpd[1887]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 13 23:57:24.207485 ntpd[1887]: basedate set to 2025-05-01
May 13 23:57:24.291149 update_engine[1894]: I20250513 23:57:24.286159 1894 main.cc:92] Flatcar Update Engine starting
May 13 23:57:24.228675 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 23:57:24.296579 extend-filesystems[1885]: Checking size of /dev/nvme0n1p9
May 13 23:57:24.207508 ntpd[1887]: gps base set to 2025-05-04 (week 2365)
May 13 23:57:24.235245 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 23:57:24.307032 extend-filesystems[1885]: Resized partition /dev/nvme0n1p9
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.299 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.301 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.303 INFO Fetch successful
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.303 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.304 INFO Fetch successful
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.304 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.305 INFO Fetch successful
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.305 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.306 INFO Fetch successful
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.306 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.307 INFO Fetch failed with 404: resource not found
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.307 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.308 INFO Fetch successful
May 13 23:57:24.318899 coreos-metadata[1882]: May 13 23:57:24.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
May 13 23:57:24.327314 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 13 23:57:24.227993 dbus-daemon[1883]: [system] SELinux support is enabled
May 13 23:57:24.235283 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 23:57:24.327735 tar[1905]: linux-amd64/helm
May 13 23:57:24.334388 extend-filesystems[1931]: resize2fs 1.47.2 (1-Jan-2025)
May 13 23:57:24.351691 update_engine[1894]: I20250513 23:57:24.321777 1894 update_check_scheduler.cc:74] Next update check in 11m8s
May 13 23:57:24.351741 coreos-metadata[1882]: May 13 23:57:24.325 INFO Fetch successful
May 13 23:57:24.351741 coreos-metadata[1882]: May 13 23:57:24.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 13 23:57:24.351741 coreos-metadata[1882]: May 13 23:57:24.326 INFO Fetch successful
May 13 23:57:24.351741 coreos-metadata[1882]: May 13 23:57:24.326 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
May 13 23:57:24.351741 coreos-metadata[1882]: May 13 23:57:24.328 INFO Fetch successful
May 13 23:57:24.351741 coreos-metadata[1882]: May 13 23:57:24.328 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
May 13 23:57:24.351741 coreos-metadata[1882]: May 13 23:57:24.330 INFO Fetch successful
May 13 23:57:24.241798 ntpd[1887]: Listen and drop on 0 v6wildcard [::]:123
May 13 23:57:24.236812 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 23:57:24.352189 jq[1910]: true
May 13 23:57:24.241855 ntpd[1887]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 13 23:57:24.236836 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 23:57:24.242049 ntpd[1887]: Listen normally on 2 lo 127.0.0.1:123
May 13 23:57:24.288628 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 13 23:57:24.242091 ntpd[1887]: Listen normally on 3 eth0 172.31.16.70:123
May 13 23:57:24.289622 systemd[1]: motdgen.service: Deactivated successfully.
May 13 23:57:24.242134 ntpd[1887]: Listen normally on 4 lo [::1]:123
May 13 23:57:24.290429 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 23:57:24.242187 ntpd[1887]: bind(21) AF_INET6 fe80::45d:36ff:fe4d:7ac5%2#123 flags 0x11 failed: Cannot assign requested address
May 13 23:57:24.310825 systemd[1]: Started update-engine.service - Update Engine.
May 13 23:57:24.242210 ntpd[1887]: unable to create socket on eth0 (5) for fe80::45d:36ff:fe4d:7ac5%2#123
May 13 23:57:24.349169 systemd-logind[1892]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 23:57:24.242228 ntpd[1887]: failed to init interface for address fe80::45d:36ff:fe4d:7ac5%2
May 13 23:57:24.349195 systemd-logind[1892]: Watching system buttons on /dev/input/event2 (Sleep Button)
May 13 23:57:24.242261 ntpd[1887]: Listening on routing socket on fd #21 for interface updates
May 13 23:57:24.349236 systemd-logind[1892]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 23:57:24.267878 dbus-daemon[1883]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1825 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 13 23:57:24.351510 systemd-logind[1892]: New seat seat0.
May 13 23:57:24.278159 ntpd[1887]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 13 23:57:24.278198 ntpd[1887]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 13 23:57:24.374284 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 23:57:24.379148 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 23:57:24.382920 systemd[1]: Finished setup-oem.service - Setup OEM.
May 13 23:57:24.443765 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 13 23:57:24.444959 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 23:57:24.455303 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 13 23:57:24.455405 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1655)
May 13 23:57:24.473801 extend-filesystems[1931]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 13 23:57:24.473801 extend-filesystems[1931]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 23:57:24.473801 extend-filesystems[1931]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 13 23:57:24.496212 extend-filesystems[1885]: Resized filesystem in /dev/nvme0n1p9
May 13 23:57:24.474780 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 23:57:24.476473 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 23:57:24.524622 bash[1982]: Updated "/home/core/.ssh/authorized_keys"
May 13 23:57:24.520497 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 23:57:24.533487 systemd[1]: Starting sshkeys.service...
May 13 23:57:24.680196 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 13 23:57:24.684834 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 13 23:57:24.781672 coreos-metadata[2047]: May 13 23:57:24.781 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 13 23:57:24.785146 coreos-metadata[2047]: May 13 23:57:24.784 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
May 13 23:57:24.785937 coreos-metadata[2047]: May 13 23:57:24.785 INFO Fetch successful
May 13 23:57:24.785937 coreos-metadata[2047]: May 13 23:57:24.785 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
May 13 23:57:24.787306 coreos-metadata[2047]: May 13 23:57:24.786 INFO Fetch successful
May 13 23:57:24.790310 unknown[2047]: wrote ssh authorized keys file for user: core
May 13 23:57:24.792138 locksmithd[1935]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 23:57:24.853756 update-ssh-keys[2064]: Updated "/home/core/.ssh/authorized_keys"
May 13 23:57:24.852384 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 13 23:57:24.855701 systemd[1]: Finished sshkeys.service.
May 13 23:57:24.919891 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 13 23:57:24.924903 dbus-daemon[1883]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 13 23:57:24.932659 dbus-daemon[1883]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1926 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 13 23:57:24.939673 systemd[1]: Starting polkit.service - Authorization Manager...
May 13 23:57:24.986205 polkitd[2073]: Started polkitd version 121
May 13 23:57:25.001306 polkitd[2073]: Loading rules from directory /etc/polkit-1/rules.d
May 13 23:57:25.002950 polkitd[2073]: Loading rules from directory /usr/share/polkit-1/rules.d
May 13 23:57:25.004248 polkitd[2073]: Finished loading, compiling and executing 2 rules
May 13 23:57:25.008895 dbus-daemon[1883]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 13 23:57:25.009553 systemd[1]: Started polkit.service - Authorization Manager.
May 13 23:57:25.011395 polkitd[2073]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 13 23:57:25.035652 systemd-resolved[1826]: System hostname changed to 'ip-172-31-16-70'.
May 13 23:57:25.035748 systemd-hostnamed[1926]: Hostname set to (transient)
May 13 23:57:25.110522 containerd[1904]: time="2025-05-13T23:57:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 23:57:25.114844 containerd[1904]: time="2025-05-13T23:57:25.114804229Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 13 23:57:25.162670 containerd[1904]: time="2025-05-13T23:57:25.162606726Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.702µs"
May 13 23:57:25.162670 containerd[1904]: time="2025-05-13T23:57:25.162664463Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 23:57:25.162836 containerd[1904]: time="2025-05-13T23:57:25.162695464Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 23:57:25.162931 containerd[1904]: time="2025-05-13T23:57:25.162907935Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 23:57:25.162972 containerd[1904]: time="2025-05-13T23:57:25.162940641Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 23:57:25.163008 containerd[1904]: time="2025-05-13T23:57:25.162984828Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:57:25.163084 containerd[1904]: time="2025-05-13T23:57:25.163062619Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:57:25.163131 containerd[1904]: time="2025-05-13T23:57:25.163092565Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:57:25.164368 containerd[1904]: time="2025-05-13T23:57:25.163440961Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:57:25.164368 containerd[1904]: time="2025-05-13T23:57:25.163465653Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:57:25.164368 containerd[1904]: time="2025-05-13T23:57:25.163489181Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:57:25.164368 containerd[1904]: time="2025-05-13T23:57:25.163508833Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 23:57:25.164368 containerd[1904]: time="2025-05-13T23:57:25.163609647Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 23:57:25.164368 containerd[1904]: time="2025-05-13T23:57:25.163867639Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:57:25.164368 containerd[1904]: time="2025-05-13T23:57:25.163909152Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:57:25.164368 containerd[1904]: time="2025-05-13T23:57:25.163925789Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 23:57:25.164368 containerd[1904]: time="2025-05-13T23:57:25.163962146Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 23:57:25.168731 containerd[1904]: time="2025-05-13T23:57:25.168687382Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 23:57:25.168849 containerd[1904]: time="2025-05-13T23:57:25.168820878Z" level=info msg="metadata content store policy set" policy=shared
May 13 23:57:25.174003 containerd[1904]: time="2025-05-13T23:57:25.173961874Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 23:57:25.174276 ntpd[1887]: bind(24) AF_INET6 fe80::45d:36ff:fe4d:7ac5%2#123 flags 0x11 failed: Cannot assign requested address
May 13 23:57:25.174319 ntpd[1887]: unable to create socket on eth0 (6) for fe80::45d:36ff:fe4d:7ac5%2#123
May 13 23:57:25.174690 ntpd[1887]: 13 May 23:57:25 ntpd[1887]: bind(24) AF_INET6 fe80::45d:36ff:fe4d:7ac5%2#123 flags 0x11 failed: Cannot assign requested address
May 13 23:57:25.174690 ntpd[1887]: 13 May 23:57:25 ntpd[1887]: unable to create socket on eth0 (6) for fe80::45d:36ff:fe4d:7ac5%2#123
May 13 23:57:25.174690 ntpd[1887]: 13 May 23:57:25 ntpd[1887]: failed to init interface for address fe80::45d:36ff:fe4d:7ac5%2
May 13 23:57:25.174335 ntpd[1887]: failed to init interface for address fe80::45d:36ff:fe4d:7ac5%2
May 13 23:57:25.174838 containerd[1904]: time="2025-05-13T23:57:25.174723409Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 23:57:25.174838 containerd[1904]: time="2025-05-13T23:57:25.174754640Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 23:57:25.174838 containerd[1904]: time="2025-05-13T23:57:25.174775900Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 23:57:25.174838 containerd[1904]: time="2025-05-13T23:57:25.174795290Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 23:57:25.174838 containerd[1904]: time="2025-05-13T23:57:25.174814193Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.174834817Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.174854565Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.174882286Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.174899351Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.174915691Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.174933871Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task
type=io.containerd.runtime.v2 May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.175077808Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.175101848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.175120727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.175138935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.175155644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.175170714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.175187290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.175202270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:57:25.176521 containerd[1904]: time="2025-05-13T23:57:25.175219714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:57:25.177043 containerd[1904]: time="2025-05-13T23:57:25.175236415Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:57:25.177043 containerd[1904]: time="2025-05-13T23:57:25.175252588Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:57:25.177043 containerd[1904]: 
time="2025-05-13T23:57:25.175328359Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:57:25.177043 containerd[1904]: time="2025-05-13T23:57:25.175345248Z" level=info msg="Start snapshots syncer" May 13 23:57:25.177043 containerd[1904]: time="2025-05-13T23:57:25.175387335Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:57:25.177222 containerd[1904]: time="2025-05-13T23:57:25.175706626Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:57:25.177222 containerd[1904]: time="2025-05-13T23:57:25.175771189Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.175860287Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.175974518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176003268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176020028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176035227Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176075411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176091205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176107129Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176138086Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176166394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176190842Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176232513Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176251532Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:57:25.177435 containerd[1904]: time="2025-05-13T23:57:25.176265534Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:57:25.177922 containerd[1904]: time="2025-05-13T23:57:25.176279439Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:57:25.177922 containerd[1904]: time="2025-05-13T23:57:25.176292389Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:57:25.177922 containerd[1904]: time="2025-05-13T23:57:25.176307166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:57:25.177922 containerd[1904]: time="2025-05-13T23:57:25.176322013Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:57:25.180362 containerd[1904]: time="2025-05-13T23:57:25.176341971Z" level=info msg="runtime interface created" May 13 23:57:25.180362 containerd[1904]: 
time="2025-05-13T23:57:25.179309684Z" level=info msg="created NRI interface" May 13 23:57:25.180362 containerd[1904]: time="2025-05-13T23:57:25.179454967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:57:25.180362 containerd[1904]: time="2025-05-13T23:57:25.179485991Z" level=info msg="Connect containerd service" May 13 23:57:25.180362 containerd[1904]: time="2025-05-13T23:57:25.179529842Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:57:25.182095 containerd[1904]: time="2025-05-13T23:57:25.182061358Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:57:25.210514 systemd-networkd[1825]: eth0: Gained IPv6LL May 13 23:57:25.214959 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:57:25.216076 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:57:25.222253 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 13 23:57:25.226592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:25.234643 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:57:25.327428 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:57:25.402627 amazon-ssm-agent[2088]: Initializing new seelog logger May 13 23:57:25.406377 amazon-ssm-agent[2088]: New Seelog Logger Creation Complete May 13 23:57:25.406377 amazon-ssm-agent[2088]: 2025/05/13 23:57:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 13 23:57:25.406377 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
May 13 23:57:25.406377 amazon-ssm-agent[2088]: 2025/05/13 23:57:25 processing appconfig overrides
May 13 23:57:25.406841 amazon-ssm-agent[2088]: 2025/05/13 23:57:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 13 23:57:25.406932 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 13 23:57:25.407075 amazon-ssm-agent[2088]: 2025/05/13 23:57:25 processing appconfig overrides
May 13 23:57:25.408743 amazon-ssm-agent[2088]: 2025/05/13 23:57:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 13 23:57:25.408819 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 13 23:57:25.408973 amazon-ssm-agent[2088]: 2025/05/13 23:57:25 processing appconfig overrides
May 13 23:57:25.409570 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO Proxy environment variables:
May 13 23:57:25.414645 amazon-ssm-agent[2088]: 2025/05/13 23:57:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 13 23:57:25.414752 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 13 23:57:25.414928 amazon-ssm-agent[2088]: 2025/05/13 23:57:25 processing appconfig overrides
May 13 23:57:25.510815 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO https_proxy:
May 13 23:57:25.579451 containerd[1904]: time="2025-05-13T23:57:25.579311329Z" level=info msg="Start subscribing containerd event"
May 13 23:57:25.579451 containerd[1904]: time="2025-05-13T23:57:25.579401554Z" level=info msg="Start recovering state"
May 13 23:57:25.581371 containerd[1904]: time="2025-05-13T23:57:25.580975055Z" level=info msg="Start event monitor"
May 13 23:57:25.581371 containerd[1904]: time="2025-05-13T23:57:25.581021399Z" level=info msg="Start cni network conf syncer for default"
May 13 23:57:25.581371 containerd[1904]: time="2025-05-13T23:57:25.581044416Z" level=info msg="Start streaming server"
May 13 23:57:25.581371 containerd[1904]: time="2025-05-13T23:57:25.581059697Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 23:57:25.581371 containerd[1904]: time="2025-05-13T23:57:25.581070874Z" level=info msg="runtime interface starting up..."
May 13 23:57:25.581371 containerd[1904]: time="2025-05-13T23:57:25.581093270Z" level=info msg="starting plugins..."
May 13 23:57:25.581371 containerd[1904]: time="2025-05-13T23:57:25.581110129Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 23:57:25.586374 containerd[1904]: time="2025-05-13T23:57:25.585466117Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 23:57:25.586374 containerd[1904]: time="2025-05-13T23:57:25.585589573Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 23:57:25.588544 systemd[1]: Started containerd.service - containerd container runtime.
May 13 23:57:25.592289 containerd[1904]: time="2025-05-13T23:57:25.589300235Z" level=info msg="containerd successfully booted in 0.479296s"
May 13 23:57:25.613648 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO http_proxy:
May 13 23:57:25.710420 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO no_proxy:
May 13 23:57:25.716935 sshd_keygen[1925]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 23:57:25.771757 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 23:57:25.776662 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 23:57:25.805241 systemd[1]: issuegen.service: Deactivated successfully.
May 13 23:57:25.806802 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 23:57:25.808803 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO Checking if agent identity type OnPrem can be assumed
May 13 23:57:25.813514 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 23:57:25.848769 tar[1905]: linux-amd64/LICENSE
May 13 23:57:25.848769 tar[1905]: linux-amd64/README.md
May 13 23:57:25.849847 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 23:57:25.862165 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 23:57:25.870768 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 13 23:57:25.872581 systemd[1]: Reached target getty.target - Login Prompts.
May 13 23:57:25.890411 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 23:57:25.907319 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO Checking if agent identity type EC2 can be assumed
May 13 23:57:26.006606 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO Agent will take identity from EC2
May 13 23:57:26.019706 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 13 23:57:26.019706 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 13 23:57:26.019706 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 13 23:57:26.019706 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
May 13 23:57:26.019706 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
May 13 23:57:26.019949 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [amazon-ssm-agent] Starting Core Agent
May 13 23:57:26.019949 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [amazon-ssm-agent] registrar detected. Attempting registration
May 13 23:57:26.019949 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [Registrar] Starting registrar module
May 13 23:57:26.019949 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
May 13 23:57:26.019949 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [EC2Identity] EC2 registration was successful.
May 13 23:57:26.019949 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [CredentialRefresher] credentialRefresher has started
May 13 23:57:26.019949 amazon-ssm-agent[2088]: 2025-05-13 23:57:25 INFO [CredentialRefresher] Starting credentials refresher loop
May 13 23:57:26.019949 amazon-ssm-agent[2088]: 2025-05-13 23:57:26 INFO EC2RoleProvider Successfully connected with instance profile role credentials
May 13 23:57:26.105662 amazon-ssm-agent[2088]: 2025-05-13 23:57:26 INFO [CredentialRefresher] Next credential rotation will be in 31.899994385566668 minutes
May 13 23:57:26.869992 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 23:57:26.872271 systemd[1]: Started sshd@0-172.31.16.70:22-147.75.109.163:49184.service - OpenSSH per-connection server daemon (147.75.109.163:49184).
May 13 23:57:27.033710 amazon-ssm-agent[2088]: 2025-05-13 23:57:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
May 13 23:57:27.081651 sshd[2136]: Accepted publickey for core from 147.75.109.163 port 49184 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:57:27.084125 sshd-session[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:57:27.091548 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 23:57:27.094547 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 23:57:27.107776 systemd-logind[1892]: New session 1 of user core.
May 13 23:57:27.136432 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 23:57:27.137248 amazon-ssm-agent[2088]: 2025-05-13 23:57:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2140) started
May 13 23:57:27.145048 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 23:57:27.162197 (systemd)[2148]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 23:57:27.167632 systemd-logind[1892]: New session c1 of user core.
May 13 23:57:27.237361 amazon-ssm-agent[2088]: 2025-05-13 23:57:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
May 13 23:57:27.346362 systemd[2148]: Queued start job for default target default.target.
May 13 23:57:27.354757 systemd[2148]: Created slice app.slice - User Application Slice.
May 13 23:57:27.354801 systemd[2148]: Reached target paths.target - Paths.
May 13 23:57:27.354948 systemd[2148]: Reached target timers.target - Timers.
May 13 23:57:27.356318 systemd[2148]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 23:57:27.369990 systemd[2148]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 23:57:27.370991 systemd[2148]: Reached target sockets.target - Sockets.
May 13 23:57:27.371064 systemd[2148]: Reached target basic.target - Basic System.
May 13 23:57:27.371117 systemd[2148]: Reached target default.target - Main User Target.
May 13 23:57:27.371154 systemd[2148]: Startup finished in 192ms.
May 13 23:57:27.371331 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 23:57:27.375679 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 23:57:27.522580 systemd[1]: Started sshd@1-172.31.16.70:22-147.75.109.163:49194.service - OpenSSH per-connection server daemon (147.75.109.163:49194).
May 13 23:57:27.690107 sshd[2165]: Accepted publickey for core from 147.75.109.163 port 49194 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:57:27.691520 sshd-session[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:57:27.697554 systemd-logind[1892]: New session 2 of user core.
May 13 23:57:27.709615 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 23:57:27.827860 sshd[2167]: Connection closed by 147.75.109.163 port 49194
May 13 23:57:27.828658 sshd-session[2165]: pam_unix(sshd:session): session closed for user core
May 13 23:57:27.831519 systemd[1]: sshd@1-172.31.16.70:22-147.75.109.163:49194.service: Deactivated successfully.
May 13 23:57:27.833866 systemd[1]: session-2.scope: Deactivated successfully.
May 13 23:57:27.834763 systemd-logind[1892]: Session 2 logged out. Waiting for processes to exit.
May 13 23:57:27.835583 systemd-logind[1892]: Removed session 2.
May 13 23:57:27.857461 systemd[1]: Started sshd@2-172.31.16.70:22-147.75.109.163:49206.service - OpenSSH per-connection server daemon (147.75.109.163:49206).
May 13 23:57:28.031770 sshd[2173]: Accepted publickey for core from 147.75.109.163 port 49206 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:57:28.033162 sshd-session[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:57:28.037845 systemd-logind[1892]: New session 3 of user core.
May 13 23:57:28.045636 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 23:57:28.162639 sshd[2175]: Connection closed by 147.75.109.163 port 49206
May 13 23:57:28.163517 sshd-session[2173]: pam_unix(sshd:session): session closed for user core
May 13 23:57:28.166219 systemd[1]: sshd@2-172.31.16.70:22-147.75.109.163:49206.service: Deactivated successfully.
May 13 23:57:28.168072 systemd[1]: session-3.scope: Deactivated successfully.
May 13 23:57:28.169491 systemd-logind[1892]: Session 3 logged out. Waiting for processes to exit.
May 13 23:57:28.170422 systemd-logind[1892]: Removed session 3.
May 13 23:57:28.174235 ntpd[1887]: Listen normally on 7 eth0 [fe80::45d:36ff:fe4d:7ac5%2]:123
May 13 23:57:28.174522 ntpd[1887]: 13 May 23:57:28 ntpd[1887]: Listen normally on 7 eth0 [fe80::45d:36ff:fe4d:7ac5%2]:123
May 13 23:57:29.008621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:57:29.010147 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 23:57:29.012446 systemd[1]: Startup finished in 603ms (kernel) + 6.879s (initrd) + 8.474s (userspace) = 15.957s.
May 13 23:57:29.023912 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:57:32.893532 systemd-resolved[1826]: Clock change detected. Flushing caches.
May 13 23:57:32.896589 kubelet[2185]: E0513 23:57:32.896506    2185 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:57:32.899376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:57:32.899528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:57:32.899813 systemd[1]: kubelet.service: Consumed 1.009s CPU time, 244.9M memory peak.
May 13 23:57:39.912969 systemd[1]: Started sshd@3-172.31.16.70:22-147.75.109.163:34522.service - OpenSSH per-connection server daemon (147.75.109.163:34522).
May 13 23:57:40.081312 sshd[2198]: Accepted publickey for core from 147.75.109.163 port 34522 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:57:40.082603 sshd-session[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:57:40.087306 systemd-logind[1892]: New session 4 of user core.
May 13 23:57:40.093457 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 23:57:40.212373 sshd[2200]: Connection closed by 147.75.109.163 port 34522
May 13 23:57:40.213493 sshd-session[2198]: pam_unix(sshd:session): session closed for user core
May 13 23:57:40.216771 systemd[1]: sshd@3-172.31.16.70:22-147.75.109.163:34522.service: Deactivated successfully.
May 13 23:57:40.218810 systemd[1]: session-4.scope: Deactivated successfully.
May 13 23:57:40.220432 systemd-logind[1892]: Session 4 logged out. Waiting for processes to exit.
May 13 23:57:40.221739 systemd-logind[1892]: Removed session 4.
May 13 23:57:40.248392 systemd[1]: Started sshd@4-172.31.16.70:22-147.75.109.163:34526.service - OpenSSH per-connection server daemon (147.75.109.163:34526).
May 13 23:57:40.420373 sshd[2206]: Accepted publickey for core from 147.75.109.163 port 34526 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:57:40.421752 sshd-session[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:57:40.426340 systemd-logind[1892]: New session 5 of user core.
May 13 23:57:40.432509 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 23:57:40.550375 sshd[2208]: Connection closed by 147.75.109.163 port 34526
May 13 23:57:40.551474 sshd-session[2206]: pam_unix(sshd:session): session closed for user core
May 13 23:57:40.554582 systemd[1]: sshd@4-172.31.16.70:22-147.75.109.163:34526.service: Deactivated successfully.
May 13 23:57:40.556695 systemd[1]: session-5.scope: Deactivated successfully.
May 13 23:57:40.558363 systemd-logind[1892]: Session 5 logged out. Waiting for processes to exit.
May 13 23:57:40.559443 systemd-logind[1892]: Removed session 5.
May 13 23:57:40.584231 systemd[1]: Started sshd@5-172.31.16.70:22-147.75.109.163:34534.service - OpenSSH per-connection server daemon (147.75.109.163:34534).
May 13 23:57:40.752440 sshd[2214]: Accepted publickey for core from 147.75.109.163 port 34534 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:57:40.753773 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:57:40.758542 systemd-logind[1892]: New session 6 of user core.
May 13 23:57:40.767431 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 23:57:40.883098 sshd[2216]: Connection closed by 147.75.109.163 port 34534
May 13 23:57:40.883973 sshd-session[2214]: pam_unix(sshd:session): session closed for user core
May 13 23:57:40.887438 systemd[1]: sshd@5-172.31.16.70:22-147.75.109.163:34534.service: Deactivated successfully.
May 13 23:57:40.889183 systemd[1]: session-6.scope: Deactivated successfully.
May 13 23:57:40.890486 systemd-logind[1892]: Session 6 logged out. Waiting for processes to exit.
May 13 23:57:40.891511 systemd-logind[1892]: Removed session 6.
May 13 23:57:40.917593 systemd[1]: Started sshd@6-172.31.16.70:22-147.75.109.163:34542.service - OpenSSH per-connection server daemon (147.75.109.163:34542).
May 13 23:57:41.089492 sshd[2222]: Accepted publickey for core from 147.75.109.163 port 34542 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:57:41.090811 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:57:41.095936 systemd-logind[1892]: New session 7 of user core.
May 13 23:57:41.103466 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 23:57:41.216344 sudo[2225]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 23:57:41.216628 sudo[2225]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:57:41.230188 sudo[2225]: pam_unix(sudo:session): session closed for user root
May 13 23:57:41.253042 sshd[2224]: Connection closed by 147.75.109.163 port 34542
May 13 23:57:41.253777 sshd-session[2222]: pam_unix(sshd:session): session closed for user core
May 13 23:57:41.257090 systemd[1]: sshd@6-172.31.16.70:22-147.75.109.163:34542.service: Deactivated successfully.
May 13 23:57:41.258797 systemd[1]: session-7.scope: Deactivated successfully.
May 13 23:57:41.260033 systemd-logind[1892]: Session 7 logged out. Waiting for processes to exit.
May 13 23:57:41.261113 systemd-logind[1892]: Removed session 7.
May 13 23:57:41.286964 systemd[1]: Started sshd@7-172.31.16.70:22-147.75.109.163:34554.service - OpenSSH per-connection server daemon (147.75.109.163:34554).
May 13 23:57:41.465048 sshd[2231]: Accepted publickey for core from 147.75.109.163 port 34554 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:57:41.466518 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:57:41.471036 systemd-logind[1892]: New session 8 of user core.
May 13 23:57:41.474408 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 23:57:41.577808 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:57:41.578089 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:41.581760 sudo[2235]: pam_unix(sudo:session): session closed for user root May 13 23:57:41.587373 sudo[2234]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:57:41.587657 sudo[2234]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:41.598052 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:57:41.646751 augenrules[2257]: No rules May 13 23:57:41.648291 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:57:41.648565 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:57:41.650154 sudo[2234]: pam_unix(sudo:session): session closed for user root May 13 23:57:41.673048 sshd[2233]: Connection closed by 147.75.109.163 port 34554 May 13 23:57:41.673607 sshd-session[2231]: pam_unix(sshd:session): session closed for user core May 13 23:57:41.676439 systemd[1]: sshd@7-172.31.16.70:22-147.75.109.163:34554.service: Deactivated successfully. May 13 23:57:41.678117 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:57:41.679372 systemd-logind[1892]: Session 8 logged out. Waiting for processes to exit. May 13 23:57:41.680695 systemd-logind[1892]: Removed session 8. May 13 23:57:41.709318 systemd[1]: Started sshd@8-172.31.16.70:22-147.75.109.163:34566.service - OpenSSH per-connection server daemon (147.75.109.163:34566). 
May 13 23:57:41.877666 sshd[2266]: Accepted publickey for core from 147.75.109.163 port 34566 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:57:41.880173 sshd-session[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:41.885656 systemd-logind[1892]: New session 9 of user core. May 13 23:57:41.892443 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:57:41.987355 sudo[2269]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:57:41.987635 sudo[2269]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:57:42.396517 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:57:42.410828 (dockerd)[2286]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:57:42.733727 dockerd[2286]: time="2025-05-13T23:57:42.733002778Z" level=info msg="Starting up" May 13 23:57:42.734223 dockerd[2286]: time="2025-05-13T23:57:42.734191353Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:57:42.762309 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3061826293-merged.mount: Deactivated successfully. May 13 23:57:42.802026 dockerd[2286]: time="2025-05-13T23:57:42.801805521Z" level=info msg="Loading containers: start." May 13 23:57:42.975223 kernel: Initializing XFRM netlink socket May 13 23:57:42.977725 (udev-worker)[2310]: Network interface NamePolicy= disabled on kernel command line. May 13 23:57:42.990469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:57:42.994442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 23:57:43.070552 systemd-networkd[1825]: docker0: Link UP May 13 23:57:43.124256 dockerd[2286]: time="2025-05-13T23:57:43.123772634Z" level=info msg="Loading containers: done." May 13 23:57:43.157412 dockerd[2286]: time="2025-05-13T23:57:43.157356633Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:57:43.157594 dockerd[2286]: time="2025-05-13T23:57:43.157451591Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:57:43.157594 dockerd[2286]: time="2025-05-13T23:57:43.157558523Z" level=info msg="Daemon has completed initialization" May 13 23:57:43.206723 dockerd[2286]: time="2025-05-13T23:57:43.206579955Z" level=info msg="API listen on /run/docker.sock" May 13 23:57:43.207278 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:57:43.219603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:43.231766 (kubelet)[2486]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:57:43.289836 kubelet[2486]: E0513 23:57:43.289675 2486 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:57:43.295300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:57:43.295501 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:57:43.296259 systemd[1]: kubelet.service: Consumed 182ms CPU time, 95.7M memory peak. 
May 13 23:57:44.378166 containerd[1904]: time="2025-05-13T23:57:44.377843220Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 23:57:44.934002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266873258.mount: Deactivated successfully. May 13 23:57:46.390568 containerd[1904]: time="2025-05-13T23:57:46.390491281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:46.392277 containerd[1904]: time="2025-05-13T23:57:46.392185082Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 13 23:57:46.393654 containerd[1904]: time="2025-05-13T23:57:46.393603106Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:46.396307 containerd[1904]: time="2025-05-13T23:57:46.396272603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:46.397305 containerd[1904]: time="2025-05-13T23:57:46.397139012Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.019259689s" May 13 23:57:46.397305 containerd[1904]: time="2025-05-13T23:57:46.397172879Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 23:57:46.415462 containerd[1904]: 
time="2025-05-13T23:57:46.415426380Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 23:57:48.079506 containerd[1904]: time="2025-05-13T23:57:48.079447622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:48.080790 containerd[1904]: time="2025-05-13T23:57:48.080732310Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 13 23:57:48.081627 containerd[1904]: time="2025-05-13T23:57:48.081565669Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:48.084229 containerd[1904]: time="2025-05-13T23:57:48.084154009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:48.085246 containerd[1904]: time="2025-05-13T23:57:48.085192380Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.669727087s" May 13 23:57:48.085332 containerd[1904]: time="2025-05-13T23:57:48.085254430Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 23:57:48.111426 containerd[1904]: time="2025-05-13T23:57:48.111165623Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 
23:57:49.223066 containerd[1904]: time="2025-05-13T23:57:49.223012618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:49.225010 containerd[1904]: time="2025-05-13T23:57:49.224947966Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 13 23:57:49.226144 containerd[1904]: time="2025-05-13T23:57:49.226099348Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:49.228559 containerd[1904]: time="2025-05-13T23:57:49.228511518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:49.229556 containerd[1904]: time="2025-05-13T23:57:49.229429893Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.118228609s" May 13 23:57:49.229556 containerd[1904]: time="2025-05-13T23:57:49.229459454Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 23:57:49.247436 containerd[1904]: time="2025-05-13T23:57:49.247394352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 23:57:50.300439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128478894.mount: Deactivated successfully. 
May 13 23:57:50.810920 containerd[1904]: time="2025-05-13T23:57:50.810850540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:50.812251 containerd[1904]: time="2025-05-13T23:57:50.812015998Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 13 23:57:50.813231 containerd[1904]: time="2025-05-13T23:57:50.813145263Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:50.817625 containerd[1904]: time="2025-05-13T23:57:50.817564459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:50.818148 containerd[1904]: time="2025-05-13T23:57:50.817984304Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.570545465s" May 13 23:57:50.818148 containerd[1904]: time="2025-05-13T23:57:50.818017580Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 23:57:50.836759 containerd[1904]: time="2025-05-13T23:57:50.836706364Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 23:57:51.330680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922032514.mount: Deactivated successfully. 
May 13 23:57:52.214547 containerd[1904]: time="2025-05-13T23:57:52.214485268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:52.216646 containerd[1904]: time="2025-05-13T23:57:52.216555174Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 23:57:52.218679 containerd[1904]: time="2025-05-13T23:57:52.218613903Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:52.221993 containerd[1904]: time="2025-05-13T23:57:52.221004902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:52.221993 containerd[1904]: time="2025-05-13T23:57:52.221768644Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.385021649s" May 13 23:57:52.221993 containerd[1904]: time="2025-05-13T23:57:52.221801038Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 23:57:52.241751 containerd[1904]: time="2025-05-13T23:57:52.241711314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 23:57:52.708533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435698128.mount: Deactivated successfully. 
May 13 23:57:52.722377 containerd[1904]: time="2025-05-13T23:57:52.722325622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:52.724303 containerd[1904]: time="2025-05-13T23:57:52.724229574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 13 23:57:52.726652 containerd[1904]: time="2025-05-13T23:57:52.726567575Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:52.730145 containerd[1904]: time="2025-05-13T23:57:52.730072147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:52.730845 containerd[1904]: time="2025-05-13T23:57:52.730806522Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 489.057815ms" May 13 23:57:52.730845 containerd[1904]: time="2025-05-13T23:57:52.730840690Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 23:57:52.749157 containerd[1904]: time="2025-05-13T23:57:52.748901180Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 23:57:53.291338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1949562893.mount: Deactivated successfully. May 13 23:57:53.550303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 13 23:57:53.555395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:53.897387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:53.908720 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:57:53.993667 kubelet[2709]: E0513 23:57:53.993532 2709 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:57:53.996519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:57:53.996908 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:57:53.997610 systemd[1]: kubelet.service: Consumed 180ms CPU time, 96.3M memory peak. 
May 13 23:57:56.000303 containerd[1904]: time="2025-05-13T23:57:56.000234616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:56.002147 containerd[1904]: time="2025-05-13T23:57:56.002083132Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 13 23:57:56.004367 containerd[1904]: time="2025-05-13T23:57:56.004300889Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:56.008263 containerd[1904]: time="2025-05-13T23:57:56.008196517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:56.009396 containerd[1904]: time="2025-05-13T23:57:56.008937692Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.260003532s" May 13 23:57:56.009396 containerd[1904]: time="2025-05-13T23:57:56.008973154Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 23:57:56.764871 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 13 23:57:58.930388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:58.931030 systemd[1]: kubelet.service: Consumed 180ms CPU time, 96.3M memory peak. May 13 23:57:58.933619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 23:57:58.960985 systemd[1]: Reload requested from client PID 2816 ('systemctl') (unit session-9.scope)... May 13 23:57:58.961179 systemd[1]: Reloading... May 13 23:57:59.105237 zram_generator::config[2857]: No configuration found. May 13 23:57:59.269573 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:57:59.387329 systemd[1]: Reloading finished in 425 ms. May 13 23:57:59.445763 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 23:57:59.445878 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 23:57:59.446194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:59.446274 systemd[1]: kubelet.service: Consumed 125ms CPU time, 83.2M memory peak. May 13 23:57:59.449361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:59.647333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:59.658640 (kubelet)[2925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:57:59.706799 kubelet[2925]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:57:59.707116 kubelet[2925]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:57:59.707116 kubelet[2925]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:57:59.708357 kubelet[2925]: I0513 23:57:59.708296 2925 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:58:00.221427 kubelet[2925]: I0513 23:58:00.221372 2925 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:58:00.221427 kubelet[2925]: I0513 23:58:00.221417 2925 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:58:00.221717 kubelet[2925]: I0513 23:58:00.221681 2925 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:58:00.253196 kubelet[2925]: I0513 23:58:00.253162 2925 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:58:00.258479 kubelet[2925]: E0513 23:58:00.258445 2925 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:00.278672 kubelet[2925]: I0513 23:58:00.278645 2925 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:58:00.281645 kubelet[2925]: I0513 23:58:00.281564 2925 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:58:00.283328 kubelet[2925]: I0513 23:58:00.281625 2925 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-70","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 23:58:00.283528 kubelet[2925]: I0513 23:58:00.283338 2925 topology_manager.go:138] "Creating topology manager with none policy" May 13 
23:58:00.283528 kubelet[2925]: I0513 23:58:00.283354 2925 container_manager_linux.go:301] "Creating device plugin manager" May 13 23:58:00.283528 kubelet[2925]: I0513 23:58:00.283487 2925 state_mem.go:36] "Initialized new in-memory state store" May 13 23:58:00.284705 kubelet[2925]: I0513 23:58:00.284624 2925 kubelet.go:400] "Attempting to sync node with API server" May 13 23:58:00.284705 kubelet[2925]: I0513 23:58:00.284648 2925 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:58:00.285886 kubelet[2925]: W0513 23:58:00.285063 2925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-70&limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:00.285886 kubelet[2925]: E0513 23:58:00.285672 2925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-70&limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:00.288225 kubelet[2925]: I0513 23:58:00.287735 2925 kubelet.go:312] "Adding apiserver pod source" May 13 23:58:00.288225 kubelet[2925]: I0513 23:58:00.287772 2925 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:58:00.294468 kubelet[2925]: I0513 23:58:00.294321 2925 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:58:00.296810 kubelet[2925]: I0513 23:58:00.296116 2925 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:58:00.296810 kubelet[2925]: W0513 23:58:00.296176 2925 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 13 23:58:00.296810 kubelet[2925]: I0513 23:58:00.296685 2925 server.go:1264] "Started kubelet" May 13 23:58:00.296929 kubelet[2925]: W0513 23:58:00.296823 2925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:00.296929 kubelet[2925]: E0513 23:58:00.296866 2925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:00.302774 kubelet[2925]: I0513 23:58:00.302664 2925 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:58:00.304743 kubelet[2925]: I0513 23:58:00.304677 2925 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:58:00.306231 kubelet[2925]: I0513 23:58:00.305070 2925 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:58:00.306231 kubelet[2925]: E0513 23:58:00.305276 2925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.70:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.70:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-70.183f3b925f82082f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-70,UID:ip-172-31-16-70,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-70,},FirstTimestamp:2025-05-13 23:58:00.296663087 +0000 UTC m=+0.633608205,LastTimestamp:2025-05-13 23:58:00.296663087 +0000 UTC m=+0.633608205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-70,}" May 13 23:58:00.311226 kubelet[2925]: I0513 23:58:00.310651 2925 server.go:455] "Adding debug handlers to kubelet server" May 13 23:58:00.311342 kubelet[2925]: I0513 23:58:00.311324 2925 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:58:00.314384 kubelet[2925]: E0513 23:58:00.314353 2925 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-16-70\" not found" May 13 23:58:00.314544 kubelet[2925]: I0513 23:58:00.314533 2925 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 23:58:00.314712 kubelet[2925]: I0513 23:58:00.314701 2925 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:58:00.316594 kubelet[2925]: I0513 23:58:00.316576 2925 reconciler.go:26] "Reconciler: start to sync state" May 13 23:58:00.317343 kubelet[2925]: W0513 23:58:00.317278 2925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:00.317510 kubelet[2925]: E0513 23:58:00.317496 2925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:00.320533 kubelet[2925]: E0513 23:58:00.320493 2925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-70?timeout=10s\": dial tcp 172.31.16.70:6443: connect: connection refused" interval="200ms" May 13 23:58:00.324227 kubelet[2925]: E0513 23:58:00.324136 2925 kubelet.go:1467] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:58:00.325260 kubelet[2925]: I0513 23:58:00.324555 2925 factory.go:221] Registration of the systemd container factory successfully May 13 23:58:00.325260 kubelet[2925]: I0513 23:58:00.324682 2925 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:58:00.326638 kubelet[2925]: I0513 23:58:00.326620 2925 factory.go:221] Registration of the containerd container factory successfully May 13 23:58:00.339716 kubelet[2925]: I0513 23:58:00.339470 2925 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:58:00.341812 kubelet[2925]: I0513 23:58:00.341353 2925 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:58:00.341812 kubelet[2925]: I0513 23:58:00.341388 2925 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:58:00.341812 kubelet[2925]: I0513 23:58:00.341413 2925 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:58:00.341812 kubelet[2925]: E0513 23:58:00.341464 2925 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:58:00.350414 kubelet[2925]: W0513 23:58:00.350358 2925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:00.350547 kubelet[2925]: E0513 23:58:00.350442 2925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://172.31.16.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:00.360023 kubelet[2925]: I0513 23:58:00.359866 2925 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:58:00.360023 kubelet[2925]: I0513 23:58:00.359961 2925 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:58:00.360023 kubelet[2925]: I0513 23:58:00.359980 2925 state_mem.go:36] "Initialized new in-memory state store" May 13 23:58:00.364934 kubelet[2925]: I0513 23:58:00.364901 2925 policy_none.go:49] "None policy: Start" May 13 23:58:00.365565 kubelet[2925]: I0513 23:58:00.365545 2925 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:58:00.365644 kubelet[2925]: I0513 23:58:00.365595 2925 state_mem.go:35] "Initializing new in-memory state store" May 13 23:58:00.373905 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:58:00.385813 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:58:00.391762 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 13 23:58:00.404283 kubelet[2925]: I0513 23:58:00.404252 2925 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:58:00.404753 kubelet[2925]: I0513 23:58:00.404710 2925 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:58:00.404866 kubelet[2925]: I0513 23:58:00.404851 2925 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:58:00.407339 kubelet[2925]: E0513 23:58:00.407318 2925 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-70\" not found" May 13 23:58:00.416713 kubelet[2925]: I0513 23:58:00.416683 2925 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-70" May 13 23:58:00.417045 kubelet[2925]: E0513 23:58:00.416989 2925 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.70:6443/api/v1/nodes\": dial tcp 172.31.16.70:6443: connect: connection refused" node="ip-172-31-16-70" May 13 23:58:00.442416 kubelet[2925]: I0513 23:58:00.442362 2925 topology_manager.go:215] "Topology Admit Handler" podUID="18b998dfb040266392274fbc421eff51" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-70" May 13 23:58:00.444351 kubelet[2925]: I0513 23:58:00.444105 2925 topology_manager.go:215] "Topology Admit Handler" podUID="38faf16b0dac64fb407160cd74fd153b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-70" May 13 23:58:00.445608 kubelet[2925]: I0513 23:58:00.445463 2925 topology_manager.go:215] "Topology Admit Handler" podUID="3455a990361ea896d8d5c6e9dcb602ae" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-70" May 13 23:58:00.452130 systemd[1]: Created slice kubepods-burstable-pod18b998dfb040266392274fbc421eff51.slice - libcontainer container kubepods-burstable-pod18b998dfb040266392274fbc421eff51.slice. 
May 13 23:58:00.471067 systemd[1]: Created slice kubepods-burstable-pod38faf16b0dac64fb407160cd74fd153b.slice - libcontainer container kubepods-burstable-pod38faf16b0dac64fb407160cd74fd153b.slice. May 13 23:58:00.489089 systemd[1]: Created slice kubepods-burstable-pod3455a990361ea896d8d5c6e9dcb602ae.slice - libcontainer container kubepods-burstable-pod3455a990361ea896d8d5c6e9dcb602ae.slice. May 13 23:58:00.517611 kubelet[2925]: I0513 23:58:00.517572 2925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3455a990361ea896d8d5c6e9dcb602ae-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-70\" (UID: \"3455a990361ea896d8d5c6e9dcb602ae\") " pod="kube-system/kube-scheduler-ip-172-31-16-70" May 13 23:58:00.517611 kubelet[2925]: I0513 23:58:00.517614 2925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18b998dfb040266392274fbc421eff51-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-70\" (UID: \"18b998dfb040266392274fbc421eff51\") " pod="kube-system/kube-apiserver-ip-172-31-16-70" May 13 23:58:00.517901 kubelet[2925]: I0513 23:58:00.517635 2925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38faf16b0dac64fb407160cd74fd153b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-70\" (UID: \"38faf16b0dac64fb407160cd74fd153b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-70" May 13 23:58:00.517901 kubelet[2925]: I0513 23:58:00.517656 2925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38faf16b0dac64fb407160cd74fd153b-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-70\" (UID: \"38faf16b0dac64fb407160cd74fd153b\") " 
pod="kube-system/kube-controller-manager-ip-172-31-16-70" May 13 23:58:00.517901 kubelet[2925]: I0513 23:58:00.517672 2925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38faf16b0dac64fb407160cd74fd153b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-70\" (UID: \"38faf16b0dac64fb407160cd74fd153b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-70" May 13 23:58:00.517901 kubelet[2925]: I0513 23:58:00.517690 2925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38faf16b0dac64fb407160cd74fd153b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-70\" (UID: \"38faf16b0dac64fb407160cd74fd153b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-70" May 13 23:58:00.517901 kubelet[2925]: I0513 23:58:00.517704 2925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38faf16b0dac64fb407160cd74fd153b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-70\" (UID: \"38faf16b0dac64fb407160cd74fd153b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-70" May 13 23:58:00.518023 kubelet[2925]: I0513 23:58:00.517724 2925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18b998dfb040266392274fbc421eff51-ca-certs\") pod \"kube-apiserver-ip-172-31-16-70\" (UID: \"18b998dfb040266392274fbc421eff51\") " pod="kube-system/kube-apiserver-ip-172-31-16-70" May 13 23:58:00.518023 kubelet[2925]: I0513 23:58:00.517740 2925 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/18b998dfb040266392274fbc421eff51-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-70\" (UID: \"18b998dfb040266392274fbc421eff51\") " pod="kube-system/kube-apiserver-ip-172-31-16-70" May 13 23:58:00.520995 kubelet[2925]: E0513 23:58:00.520946 2925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-70?timeout=10s\": dial tcp 172.31.16.70:6443: connect: connection refused" interval="400ms" May 13 23:58:00.618824 kubelet[2925]: I0513 23:58:00.618791 2925 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-70" May 13 23:58:00.619125 kubelet[2925]: E0513 23:58:00.619094 2925 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.70:6443/api/v1/nodes\": dial tcp 172.31.16.70:6443: connect: connection refused" node="ip-172-31-16-70" May 13 23:58:00.770008 containerd[1904]: time="2025-05-13T23:58:00.769822356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-70,Uid:18b998dfb040266392274fbc421eff51,Namespace:kube-system,Attempt:0,}" May 13 23:58:00.786873 containerd[1904]: time="2025-05-13T23:58:00.786825422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-70,Uid:38faf16b0dac64fb407160cd74fd153b,Namespace:kube-system,Attempt:0,}" May 13 23:58:00.793598 containerd[1904]: time="2025-05-13T23:58:00.793560766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-70,Uid:3455a990361ea896d8d5c6e9dcb602ae,Namespace:kube-system,Attempt:0,}" May 13 23:58:00.922059 kubelet[2925]: E0513 23:58:00.922013 2925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-70?timeout=10s\": dial tcp 172.31.16.70:6443: connect: 
connection refused" interval="800ms" May 13 23:58:01.021734 kubelet[2925]: I0513 23:58:01.021415 2925 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-70" May 13 23:58:01.021946 kubelet[2925]: E0513 23:58:01.021895 2925 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.70:6443/api/v1/nodes\": dial tcp 172.31.16.70:6443: connect: connection refused" node="ip-172-31-16-70" May 13 23:58:01.104513 kubelet[2925]: W0513 23:58:01.104453 2925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:01.104513 kubelet[2925]: E0513 23:58:01.104515 2925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:01.108962 kubelet[2925]: W0513 23:58:01.108872 2925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-70&limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:01.108962 kubelet[2925]: E0513 23:58:01.108938 2925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-70&limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:01.345558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449440914.mount: Deactivated successfully. 
May 13 23:58:01.362697 containerd[1904]: time="2025-05-13T23:58:01.362553509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:58:01.369269 containerd[1904]: time="2025-05-13T23:58:01.369178355Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 23:58:01.385231 containerd[1904]: time="2025-05-13T23:58:01.383618280Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:58:01.385231 containerd[1904]: time="2025-05-13T23:58:01.384942238Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:58:01.393564 containerd[1904]: time="2025-05-13T23:58:01.393494467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 23:58:01.398558 containerd[1904]: time="2025-05-13T23:58:01.398505743Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:58:01.400686 containerd[1904]: time="2025-05-13T23:58:01.400618761Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 23:58:01.402477 containerd[1904]: time="2025-05-13T23:58:01.402408788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 
23:58:01.403482 containerd[1904]: time="2025-05-13T23:58:01.403264064Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 620.96774ms" May 13 23:58:01.408463 containerd[1904]: time="2025-05-13T23:58:01.408382156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 616.468605ms" May 13 23:58:01.411024 containerd[1904]: time="2025-05-13T23:58:01.410972755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 609.663968ms" May 13 23:58:01.425906 kubelet[2925]: W0513 23:58:01.423676 2925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:01.425906 kubelet[2925]: E0513 23:58:01.423759 2925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:01.598300 containerd[1904]: time="2025-05-13T23:58:01.596337998Z" level=info msg="connecting to shim 
e060f7cd094f8a9f2f404d8cf29aa6dbe4ad8ba707e14333d8f156eafcf8e5bb" address="unix:///run/containerd/s/1227867335d12f23d3f799886972f59c73754c330d5cc236eb96245a7ad10737" namespace=k8s.io protocol=ttrpc version=3 May 13 23:58:01.604934 containerd[1904]: time="2025-05-13T23:58:01.604879128Z" level=info msg="connecting to shim c83d294b6215a502271f12a478478c558073548e9e90a1fb2069792c22502edb" address="unix:///run/containerd/s/ffde0d3e0861e36685e035f7f674fcaf40a5a3860e9524d42177d8cf9794235c" namespace=k8s.io protocol=ttrpc version=3 May 13 23:58:01.606415 containerd[1904]: time="2025-05-13T23:58:01.606164473Z" level=info msg="connecting to shim 848bfa66ca2717de93eca804a4db0c0725a12137f237df48722b7caf842a08d8" address="unix:///run/containerd/s/5ef55dcef7d672683f77e669d2dd0694ef7bf9f84bf0ec3804c9a36cec6fced2" namespace=k8s.io protocol=ttrpc version=3 May 13 23:58:01.717575 systemd[1]: Started cri-containerd-848bfa66ca2717de93eca804a4db0c0725a12137f237df48722b7caf842a08d8.scope - libcontainer container 848bfa66ca2717de93eca804a4db0c0725a12137f237df48722b7caf842a08d8. May 13 23:58:01.721367 systemd[1]: Started cri-containerd-c83d294b6215a502271f12a478478c558073548e9e90a1fb2069792c22502edb.scope - libcontainer container c83d294b6215a502271f12a478478c558073548e9e90a1fb2069792c22502edb. May 13 23:58:01.725411 systemd[1]: Started cri-containerd-e060f7cd094f8a9f2f404d8cf29aa6dbe4ad8ba707e14333d8f156eafcf8e5bb.scope - libcontainer container e060f7cd094f8a9f2f404d8cf29aa6dbe4ad8ba707e14333d8f156eafcf8e5bb. 
May 13 23:58:01.730904 kubelet[2925]: E0513 23:58:01.730855 2925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-70?timeout=10s\": dial tcp 172.31.16.70:6443: connect: connection refused" interval="1.6s" May 13 23:58:01.768130 kubelet[2925]: W0513 23:58:01.767942 2925 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:01.771016 kubelet[2925]: E0513 23:58:01.770973 2925 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.70:6443: connect: connection refused May 13 23:58:01.830079 kubelet[2925]: I0513 23:58:01.830032 2925 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-70" May 13 23:58:01.834502 kubelet[2925]: E0513 23:58:01.832319 2925 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.70:6443/api/v1/nodes\": dial tcp 172.31.16.70:6443: connect: connection refused" node="ip-172-31-16-70" May 13 23:58:01.834747 containerd[1904]: time="2025-05-13T23:58:01.834699690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-70,Uid:3455a990361ea896d8d5c6e9dcb602ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"e060f7cd094f8a9f2f404d8cf29aa6dbe4ad8ba707e14333d8f156eafcf8e5bb\"" May 13 23:58:01.841691 containerd[1904]: time="2025-05-13T23:58:01.841648660Z" level=info msg="CreateContainer within sandbox \"e060f7cd094f8a9f2f404d8cf29aa6dbe4ad8ba707e14333d8f156eafcf8e5bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 
23:58:01.868494 containerd[1904]: time="2025-05-13T23:58:01.868193011Z" level=info msg="Container 4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e: CDI devices from CRI Config.CDIDevices: []" May 13 23:58:01.871937 containerd[1904]: time="2025-05-13T23:58:01.871893622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-70,Uid:38faf16b0dac64fb407160cd74fd153b,Namespace:kube-system,Attempt:0,} returns sandbox id \"848bfa66ca2717de93eca804a4db0c0725a12137f237df48722b7caf842a08d8\"" May 13 23:58:01.876189 containerd[1904]: time="2025-05-13T23:58:01.876144979Z" level=info msg="CreateContainer within sandbox \"848bfa66ca2717de93eca804a4db0c0725a12137f237df48722b7caf842a08d8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:58:01.890016 containerd[1904]: time="2025-05-13T23:58:01.889979537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-70,Uid:18b998dfb040266392274fbc421eff51,Namespace:kube-system,Attempt:0,} returns sandbox id \"c83d294b6215a502271f12a478478c558073548e9e90a1fb2069792c22502edb\"" May 13 23:58:01.891111 containerd[1904]: time="2025-05-13T23:58:01.891016023Z" level=info msg="CreateContainer within sandbox \"e060f7cd094f8a9f2f404d8cf29aa6dbe4ad8ba707e14333d8f156eafcf8e5bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e\"" May 13 23:58:01.892502 containerd[1904]: time="2025-05-13T23:58:01.892295021Z" level=info msg="StartContainer for \"4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e\"" May 13 23:58:01.897386 containerd[1904]: time="2025-05-13T23:58:01.895287769Z" level=info msg="connecting to shim 4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e" address="unix:///run/containerd/s/1227867335d12f23d3f799886972f59c73754c330d5cc236eb96245a7ad10737" protocol=ttrpc version=3 May 13 23:58:01.900984 
containerd[1904]: time="2025-05-13T23:58:01.900929182Z" level=info msg="CreateContainer within sandbox \"c83d294b6215a502271f12a478478c558073548e9e90a1fb2069792c22502edb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:58:01.911333 containerd[1904]: time="2025-05-13T23:58:01.911260785Z" level=info msg="Container 61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b: CDI devices from CRI Config.CDIDevices: []" May 13 23:58:01.939694 systemd[1]: Started cri-containerd-4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e.scope - libcontainer container 4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e. May 13 23:58:01.946752 containerd[1904]: time="2025-05-13T23:58:01.946691530Z" level=info msg="CreateContainer within sandbox \"848bfa66ca2717de93eca804a4db0c0725a12137f237df48722b7caf842a08d8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b\"" May 13 23:58:01.949538 containerd[1904]: time="2025-05-13T23:58:01.949487663Z" level=info msg="StartContainer for \"61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b\"" May 13 23:58:01.957155 containerd[1904]: time="2025-05-13T23:58:01.956420943Z" level=info msg="connecting to shim 61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b" address="unix:///run/containerd/s/5ef55dcef7d672683f77e669d2dd0694ef7bf9f84bf0ec3804c9a36cec6fced2" protocol=ttrpc version=3 May 13 23:58:01.957641 containerd[1904]: time="2025-05-13T23:58:01.957591321Z" level=info msg="Container 9005654dc1e7ef1dea74b54299ecafa5ba9293332c2cfdbad7bdee56c3e19ac9: CDI devices from CRI Config.CDIDevices: []" May 13 23:58:01.979441 containerd[1904]: time="2025-05-13T23:58:01.979363036Z" level=info msg="CreateContainer within sandbox \"c83d294b6215a502271f12a478478c558073548e9e90a1fb2069792c22502edb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"9005654dc1e7ef1dea74b54299ecafa5ba9293332c2cfdbad7bdee56c3e19ac9\"" May 13 23:58:01.983134 containerd[1904]: time="2025-05-13T23:58:01.983041701Z" level=info msg="StartContainer for \"9005654dc1e7ef1dea74b54299ecafa5ba9293332c2cfdbad7bdee56c3e19ac9\"" May 13 23:58:01.985683 containerd[1904]: time="2025-05-13T23:58:01.985604025Z" level=info msg="connecting to shim 9005654dc1e7ef1dea74b54299ecafa5ba9293332c2cfdbad7bdee56c3e19ac9" address="unix:///run/containerd/s/ffde0d3e0861e36685e035f7f674fcaf40a5a3860e9524d42177d8cf9794235c" protocol=ttrpc version=3 May 13 23:58:02.011450 systemd[1]: Started cri-containerd-61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b.scope - libcontainer container 61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b. May 13 23:58:02.050231 systemd[1]: Started cri-containerd-9005654dc1e7ef1dea74b54299ecafa5ba9293332c2cfdbad7bdee56c3e19ac9.scope - libcontainer container 9005654dc1e7ef1dea74b54299ecafa5ba9293332c2cfdbad7bdee56c3e19ac9. May 13 23:58:02.081349 containerd[1904]: time="2025-05-13T23:58:02.080827267Z" level=info msg="StartContainer for \"4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e\" returns successfully" May 13 23:58:02.156441 containerd[1904]: time="2025-05-13T23:58:02.155649702Z" level=info msg="StartContainer for \"9005654dc1e7ef1dea74b54299ecafa5ba9293332c2cfdbad7bdee56c3e19ac9\" returns successfully" May 13 23:58:02.171025 containerd[1904]: time="2025-05-13T23:58:02.170965894Z" level=info msg="StartContainer for \"61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b\" returns successfully" May 13 23:58:02.403497 kubelet[2925]: E0513 23:58:02.403453 2925 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.70:6443: connect: 
connection refused May 13 23:58:03.434840 kubelet[2925]: I0513 23:58:03.434446 2925 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-70" May 13 23:58:05.285971 kubelet[2925]: E0513 23:58:05.285860 2925 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-70\" not found" node="ip-172-31-16-70" May 13 23:58:05.295295 kubelet[2925]: I0513 23:58:05.295258 2925 apiserver.go:52] "Watching apiserver" May 13 23:58:05.315217 kubelet[2925]: I0513 23:58:05.315147 2925 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:58:05.368193 kubelet[2925]: I0513 23:58:05.367992 2925 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-70" May 13 23:58:07.604435 systemd[1]: Reload requested from client PID 3199 ('systemctl') (unit session-9.scope)... May 13 23:58:07.604460 systemd[1]: Reloading... May 13 23:58:07.723344 zram_generator::config[3242]: No configuration found. May 13 23:58:07.936532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:58:08.124034 systemd[1]: Reloading finished in 518 ms. May 13 23:58:08.155401 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:08.169703 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:58:08.169924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:08.169978 systemd[1]: kubelet.service: Consumed 1.030s CPU time, 111.6M memory peak. May 13 23:58:08.172746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:08.420801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:58:08.427697 (kubelet)[3303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:58:08.528043 kubelet[3303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:58:08.528043 kubelet[3303]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:58:08.528043 kubelet[3303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:58:08.532236 kubelet[3303]: I0513 23:58:08.531618 3303 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:58:08.537471 kubelet[3303]: I0513 23:58:08.537440 3303 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:58:08.537471 kubelet[3303]: I0513 23:58:08.537463 3303 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:58:08.537719 kubelet[3303]: I0513 23:58:08.537687 3303 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:58:08.540902 kubelet[3303]: I0513 23:58:08.540866 3303 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:58:08.542329 kubelet[3303]: I0513 23:58:08.542152 3303 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:58:08.547987 kubelet[3303]: I0513 23:58:08.547849 3303 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
May 13 23:58:08.548115 kubelet[3303]: I0513 23:58:08.548040 3303 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:58:08.548262 kubelet[3303]: I0513 23:58:08.548063 3303 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-70","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 23:58:08.548380 kubelet[3303]: I0513 23:58:08.548268 3303 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:58:08.548380 kubelet[3303]: I0513 23:58:08.548278 3303 container_manager_linux.go:301] "Creating device plugin manager"
May 13 23:58:08.552321 kubelet[3303]: I0513 23:58:08.552286 3303 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:58:08.554821 kubelet[3303]: I0513 23:58:08.554787 3303 kubelet.go:400] "Attempting to sync node with API server"
May 13 23:58:08.554918 kubelet[3303]: I0513 23:58:08.554829 3303 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:58:08.554918 kubelet[3303]: I0513 23:58:08.554855 3303 kubelet.go:312] "Adding apiserver pod source"
May 13 23:58:08.554918 kubelet[3303]: I0513 23:58:08.554873 3303 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:58:08.570340 kubelet[3303]: I0513 23:58:08.570303 3303 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:58:08.572534 kubelet[3303]: I0513 23:58:08.572508 3303 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:58:08.584289 kubelet[3303]: I0513 23:58:08.580160 3303 server.go:1264] "Started kubelet"
May 13 23:58:08.584289 kubelet[3303]: I0513 23:58:08.580466 3303 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:58:08.584289 kubelet[3303]: I0513 23:58:08.581484 3303 server.go:455] "Adding debug handlers to kubelet server"
May 13 23:58:08.584289 kubelet[3303]: I0513 23:58:08.583367 3303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:58:08.584289 kubelet[3303]: I0513 23:58:08.583558 3303 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:58:08.584824 kubelet[3303]: I0513 23:58:08.584809 3303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:58:08.593536 kubelet[3303]: E0513 23:58:08.593506 3303 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:58:08.594440 kubelet[3303]: I0513 23:58:08.594422 3303 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 23:58:08.594527 kubelet[3303]: I0513 23:58:08.594516 3303 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 23:58:08.594638 kubelet[3303]: I0513 23:58:08.594626 3303 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:58:08.596537 kubelet[3303]: I0513 23:58:08.596514 3303 factory.go:221] Registration of the systemd container factory successfully
May 13 23:58:08.596617 kubelet[3303]: I0513 23:58:08.596589 3303 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:58:08.603130 kubelet[3303]: I0513 23:58:08.602956 3303 factory.go:221] Registration of the containerd container factory successfully
May 13 23:58:08.616602 kubelet[3303]: I0513 23:58:08.603445 3303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:58:08.620403 kubelet[3303]: I0513 23:58:08.620372 3303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:58:08.620650 kubelet[3303]: I0513 23:58:08.620594 3303 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 23:58:08.620960 kubelet[3303]: I0513 23:58:08.620944 3303 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 23:58:08.621338 kubelet[3303]: E0513 23:58:08.621138 3303 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:58:08.640851 sudo[3324]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 13 23:58:08.641800 sudo[3324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 13 23:58:08.705361 kubelet[3303]: I0513 23:58:08.705141 3303 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 23:58:08.705361 kubelet[3303]: I0513 23:58:08.705164 3303 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 23:58:08.705361 kubelet[3303]: I0513 23:58:08.705187 3303 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:58:08.705996 kubelet[3303]: I0513 23:58:08.705493 3303 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 23:58:08.705996 kubelet[3303]: I0513 23:58:08.705523 3303 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 23:58:08.705996 kubelet[3303]: I0513 23:58:08.705546 3303 policy_none.go:49] "None policy: Start"
May 13 23:58:08.708772 kubelet[3303]: I0513 23:58:08.708748 3303 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 23:58:08.708865 kubelet[3303]: I0513 23:58:08.708783 3303 state_mem.go:35] "Initializing new in-memory state store"
May 13 23:58:08.709653 kubelet[3303]: I0513 23:58:08.709075 3303 state_mem.go:75] "Updated machine memory state"
May 13 23:58:08.722602 kubelet[3303]: I0513 23:58:08.720999 3303 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 23:58:08.722602 kubelet[3303]: I0513 23:58:08.721212 3303 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 23:58:08.727969 kubelet[3303]: I0513 23:58:08.725066 3303 topology_manager.go:215] "Topology Admit Handler" podUID="18b998dfb040266392274fbc421eff51" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-70"
May 13 23:58:08.727969 kubelet[3303]: I0513 23:58:08.725277 3303 topology_manager.go:215] "Topology Admit Handler" podUID="38faf16b0dac64fb407160cd74fd153b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-70"
May 13 23:58:08.727969 kubelet[3303]: I0513 23:58:08.725395 3303 topology_manager.go:215] "Topology Admit Handler" podUID="3455a990361ea896d8d5c6e9dcb602ae" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-70"
May 13 23:58:08.731231 kubelet[3303]: I0513 23:58:08.728641 3303 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 23:58:08.847981 kubelet[3303]: I0513 23:58:08.847924 3303 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-70"
May 13 23:58:08.858195 kubelet[3303]: I0513 23:58:08.858162 3303 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-16-70"
May 13 23:58:08.858398 kubelet[3303]: I0513 23:58:08.858274 3303 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-70"
May 13 23:58:08.895543 kubelet[3303]: I0513 23:58:08.895503 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38faf16b0dac64fb407160cd74fd153b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-70\" (UID: \"38faf16b0dac64fb407160cd74fd153b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-70"
May 13 23:58:08.895706 kubelet[3303]: I0513 23:58:08.895552 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38faf16b0dac64fb407160cd74fd153b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-70\" (UID: \"38faf16b0dac64fb407160cd74fd153b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-70"
May 13 23:58:08.895706 kubelet[3303]: I0513 23:58:08.895581 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3455a990361ea896d8d5c6e9dcb602ae-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-70\" (UID: \"3455a990361ea896d8d5c6e9dcb602ae\") " pod="kube-system/kube-scheduler-ip-172-31-16-70"
May 13 23:58:08.895706 kubelet[3303]: I0513 23:58:08.895604 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18b998dfb040266392274fbc421eff51-ca-certs\") pod \"kube-apiserver-ip-172-31-16-70\" (UID: \"18b998dfb040266392274fbc421eff51\") " pod="kube-system/kube-apiserver-ip-172-31-16-70"
May 13 23:58:08.895856 kubelet[3303]: I0513 23:58:08.895737 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18b998dfb040266392274fbc421eff51-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-70\" (UID: \"18b998dfb040266392274fbc421eff51\") " pod="kube-system/kube-apiserver-ip-172-31-16-70"
May 13 23:58:08.895856 kubelet[3303]: I0513 23:58:08.895773 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18b998dfb040266392274fbc421eff51-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-70\" (UID: \"18b998dfb040266392274fbc421eff51\") " pod="kube-system/kube-apiserver-ip-172-31-16-70"
May 13 23:58:08.895856 kubelet[3303]: I0513 23:58:08.895801 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38faf16b0dac64fb407160cd74fd153b-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-70\" (UID: \"38faf16b0dac64fb407160cd74fd153b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-70"
May 13 23:58:08.895856 kubelet[3303]: I0513 23:58:08.895838 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38faf16b0dac64fb407160cd74fd153b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-70\" (UID: \"38faf16b0dac64fb407160cd74fd153b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-70"
May 13 23:58:08.896016 kubelet[3303]: I0513 23:58:08.895863 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38faf16b0dac64fb407160cd74fd153b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-70\" (UID: \"38faf16b0dac64fb407160cd74fd153b\") " pod="kube-system/kube-controller-manager-ip-172-31-16-70"
May 13 23:58:09.336918 sudo[3324]: pam_unix(sudo:session): session closed for user root
May 13 23:58:09.559001 kubelet[3303]: I0513 23:58:09.558955 3303 apiserver.go:52] "Watching apiserver"
May 13 23:58:09.595762 kubelet[3303]: I0513 23:58:09.595636 3303 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 23:58:09.715687 kubelet[3303]: I0513 23:58:09.715597 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-70" podStartSLOduration=1.715575855 podStartE2EDuration="1.715575855s" podCreationTimestamp="2025-05-13 23:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:09.697627371 +0000 UTC m=+1.240672193" watchObservedRunningTime="2025-05-13 23:58:09.715575855 +0000 UTC m=+1.258620674"
May 13 23:58:09.715906 kubelet[3303]: I0513 23:58:09.715769 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-70" podStartSLOduration=1.7157609969999998 podStartE2EDuration="1.715760997s" podCreationTimestamp="2025-05-13 23:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:09.713613641 +0000 UTC m=+1.256658462" watchObservedRunningTime="2025-05-13 23:58:09.715760997 +0000 UTC m=+1.258805817"
May 13 23:58:09.741225 kubelet[3303]: I0513 23:58:09.739712 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-70" podStartSLOduration=1.739695531 podStartE2EDuration="1.739695531s" podCreationTimestamp="2025-05-13 23:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:09.727321404 +0000 UTC m=+1.270366222" watchObservedRunningTime="2025-05-13 23:58:09.739695531 +0000 UTC m=+1.282740343"
May 13 23:58:10.915491 sudo[2269]: pam_unix(sudo:session): session closed for user root
May 13 23:58:10.937657 sshd[2268]: Connection closed by 147.75.109.163 port 34566
May 13 23:58:10.938729 sshd-session[2266]: pam_unix(sshd:session): session closed for user core
May 13 23:58:10.941858 systemd[1]: sshd@8-172.31.16.70:22-147.75.109.163:34566.service: Deactivated successfully.
May 13 23:58:10.943791 systemd[1]: session-9.scope: Deactivated successfully.
May 13 23:58:10.944127 systemd[1]: session-9.scope: Consumed 4.964s CPU time, 225.1M memory peak.
May 13 23:58:10.946127 systemd-logind[1892]: Session 9 logged out. Waiting for processes to exit.
May 13 23:58:10.947496 systemd-logind[1892]: Removed session 9.
May 13 23:58:11.774697 update_engine[1894]: I20250513 23:58:11.774617 1894 update_attempter.cc:509] Updating boot flags...
May 13 23:58:11.837051 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3388)
May 13 23:58:11.983224 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3387)
May 13 23:58:12.145238 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3387)
May 13 23:58:22.296597 kubelet[3303]: I0513 23:58:22.296392 3303 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 23:58:22.298650 containerd[1904]: time="2025-05-13T23:58:22.298609996Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 23:58:22.299359 kubelet[3303]: I0513 23:58:22.298908 3303 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 23:58:22.351875 kubelet[3303]: I0513 23:58:22.351100 3303 topology_manager.go:215] "Topology Admit Handler" podUID="a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" podNamespace="kube-system" podName="cilium-fpppb"
May 13 23:58:22.351875 kubelet[3303]: I0513 23:58:22.351358 3303 topology_manager.go:215] "Topology Admit Handler" podUID="fc002931-a067-4a93-9ddc-f9288ecd6f1f" podNamespace="kube-system" podName="kube-proxy-kgvrz"
May 13 23:58:22.353432 kubelet[3303]: I0513 23:58:22.353395 3303 topology_manager.go:215] "Topology Admit Handler" podUID="7caa7b57-cb4d-4a81-aecb-798953548156" podNamespace="kube-system" podName="cilium-operator-599987898-plzvz"
May 13 23:58:22.368242 systemd[1]: Created slice kubepods-besteffort-podfc002931_a067_4a93_9ddc_f9288ecd6f1f.slice - libcontainer container kubepods-besteffort-podfc002931_a067_4a93_9ddc_f9288ecd6f1f.slice.
May 13 23:58:22.387538 systemd[1]: Created slice kubepods-burstable-poda9241d94_e91e_459f_a5a5_a9f03cd3ed4d.slice - libcontainer container kubepods-burstable-poda9241d94_e91e_459f_a5a5_a9f03cd3ed4d.slice.
May 13 23:58:22.396480 systemd[1]: Created slice kubepods-besteffort-pod7caa7b57_cb4d_4a81_aecb_798953548156.slice - libcontainer container kubepods-besteffort-pod7caa7b57_cb4d_4a81_aecb_798953548156.slice.
May 13 23:58:22.403198 kubelet[3303]: I0513 23:58:22.403153 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc002931-a067-4a93-9ddc-f9288ecd6f1f-kube-proxy\") pod \"kube-proxy-kgvrz\" (UID: \"fc002931-a067-4a93-9ddc-f9288ecd6f1f\") " pod="kube-system/kube-proxy-kgvrz"
May 13 23:58:22.403198 kubelet[3303]: I0513 23:58:22.403209 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cni-path\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403378 kubelet[3303]: I0513 23:58:22.403229 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-host-proc-sys-kernel\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403378 kubelet[3303]: I0513 23:58:22.403244 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swhs7\" (UniqueName: \"kubernetes.io/projected/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-kube-api-access-swhs7\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403378 kubelet[3303]: I0513 23:58:22.403263 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-clustermesh-secrets\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403378 kubelet[3303]: I0513 23:58:22.403279 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc002931-a067-4a93-9ddc-f9288ecd6f1f-xtables-lock\") pod \"kube-proxy-kgvrz\" (UID: \"fc002931-a067-4a93-9ddc-f9288ecd6f1f\") " pod="kube-system/kube-proxy-kgvrz"
May 13 23:58:22.403378 kubelet[3303]: I0513 23:58:22.403292 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc002931-a067-4a93-9ddc-f9288ecd6f1f-lib-modules\") pod \"kube-proxy-kgvrz\" (UID: \"fc002931-a067-4a93-9ddc-f9288ecd6f1f\") " pod="kube-system/kube-proxy-kgvrz"
May 13 23:58:22.403528 kubelet[3303]: I0513 23:58:22.403311 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pdhr\" (UniqueName: \"kubernetes.io/projected/fc002931-a067-4a93-9ddc-f9288ecd6f1f-kube-api-access-7pdhr\") pod \"kube-proxy-kgvrz\" (UID: \"fc002931-a067-4a93-9ddc-f9288ecd6f1f\") " pod="kube-system/kube-proxy-kgvrz"
May 13 23:58:22.403528 kubelet[3303]: I0513 23:58:22.403327 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7caa7b57-cb4d-4a81-aecb-798953548156-cilium-config-path\") pod \"cilium-operator-599987898-plzvz\" (UID: \"7caa7b57-cb4d-4a81-aecb-798953548156\") " pod="kube-system/cilium-operator-599987898-plzvz"
May 13 23:58:22.403528 kubelet[3303]: I0513 23:58:22.403344 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-lib-modules\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403528 kubelet[3303]: I0513 23:58:22.403357 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-xtables-lock\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403528 kubelet[3303]: I0513 23:58:22.403372 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-run\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403656 kubelet[3303]: I0513 23:58:22.403388 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-etc-cni-netd\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403656 kubelet[3303]: I0513 23:58:22.403405 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-hubble-tls\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403656 kubelet[3303]: I0513 23:58:22.403420 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-bpf-maps\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403656 kubelet[3303]: I0513 23:58:22.403434 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-hostproc\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403656 kubelet[3303]: I0513 23:58:22.403450 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-config-path\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.403656 kubelet[3303]: I0513 23:58:22.403467 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-host-proc-sys-net\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.404270 kubelet[3303]: I0513 23:58:22.403481 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-cgroup\") pod \"cilium-fpppb\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") " pod="kube-system/cilium-fpppb"
May 13 23:58:22.404270 kubelet[3303]: I0513 23:58:22.403505 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6h48\" (UniqueName: \"kubernetes.io/projected/7caa7b57-cb4d-4a81-aecb-798953548156-kube-api-access-f6h48\") pod \"cilium-operator-599987898-plzvz\" (UID: \"7caa7b57-cb4d-4a81-aecb-798953548156\") " pod="kube-system/cilium-operator-599987898-plzvz"
May 13 23:58:22.683153 containerd[1904]: time="2025-05-13T23:58:22.683108955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kgvrz,Uid:fc002931-a067-4a93-9ddc-f9288ecd6f1f,Namespace:kube-system,Attempt:0,}"
May 13 23:58:22.693280 containerd[1904]: time="2025-05-13T23:58:22.693225453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fpppb,Uid:a9241d94-e91e-459f-a5a5-a9f03cd3ed4d,Namespace:kube-system,Attempt:0,}"
May 13 23:58:22.699709 containerd[1904]: time="2025-05-13T23:58:22.699649871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-plzvz,Uid:7caa7b57-cb4d-4a81-aecb-798953548156,Namespace:kube-system,Attempt:0,}"
May 13 23:58:22.753188 containerd[1904]: time="2025-05-13T23:58:22.753126067Z" level=info msg="connecting to shim bfd3d2de1d80615ec18dc8f94de56266c66e73a0d0276e213f1fb4325204e7f9" address="unix:///run/containerd/s/9d65190768c3fad08ec65b6b79973ff5b34656a66fa460976f81a23b787eedb3" namespace=k8s.io protocol=ttrpc version=3
May 13 23:58:22.791534 containerd[1904]: time="2025-05-13T23:58:22.791477379Z" level=info msg="connecting to shim 3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd" address="unix:///run/containerd/s/02016a49f6953922492957db7cd9c90a6019eb7a0a06d593e3d60dd4b7b17f0d" namespace=k8s.io protocol=ttrpc version=3
May 13 23:58:22.795895 containerd[1904]: time="2025-05-13T23:58:22.795278021Z" level=info msg="connecting to shim f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3" address="unix:///run/containerd/s/029e98e9a54f1c0f872416d63f195c7c2ff545dc85d94e993fc2e5c9dab6f4ad" namespace=k8s.io protocol=ttrpc version=3
May 13 23:58:22.822526 systemd[1]: Started cri-containerd-bfd3d2de1d80615ec18dc8f94de56266c66e73a0d0276e213f1fb4325204e7f9.scope - libcontainer container bfd3d2de1d80615ec18dc8f94de56266c66e73a0d0276e213f1fb4325204e7f9.
May 13 23:58:22.867428 systemd[1]: Started cri-containerd-3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd.scope - libcontainer container 3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd.
May 13 23:58:22.869627 systemd[1]: Started cri-containerd-f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3.scope - libcontainer container f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3.
May 13 23:58:22.916688 containerd[1904]: time="2025-05-13T23:58:22.916330212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kgvrz,Uid:fc002931-a067-4a93-9ddc-f9288ecd6f1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfd3d2de1d80615ec18dc8f94de56266c66e73a0d0276e213f1fb4325204e7f9\""
May 13 23:58:22.924565 containerd[1904]: time="2025-05-13T23:58:22.924522119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fpppb,Uid:a9241d94-e91e-459f-a5a5-a9f03cd3ed4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\""
May 13 23:58:22.927190 containerd[1904]: time="2025-05-13T23:58:22.926702922Z" level=info msg="CreateContainer within sandbox \"bfd3d2de1d80615ec18dc8f94de56266c66e73a0d0276e213f1fb4325204e7f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 23:58:22.935410 containerd[1904]: time="2025-05-13T23:58:22.934903522Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 13 23:58:22.970738 containerd[1904]: time="2025-05-13T23:58:22.970554620Z" level=info msg="Container 762927962f5ddbf89dc34c90b7413296a2919eadbcf64dcf6ec779658bdea92d: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:22.980099 containerd[1904]: time="2025-05-13T23:58:22.980045104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-plzvz,Uid:7caa7b57-cb4d-4a81-aecb-798953548156,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\""
May 13 23:58:22.991540 containerd[1904]: time="2025-05-13T23:58:22.991505172Z" level=info msg="CreateContainer within sandbox \"bfd3d2de1d80615ec18dc8f94de56266c66e73a0d0276e213f1fb4325204e7f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"762927962f5ddbf89dc34c90b7413296a2919eadbcf64dcf6ec779658bdea92d\""
May 13 23:58:22.998997 containerd[1904]: time="2025-05-13T23:58:22.998931783Z" level=info msg="StartContainer for \"762927962f5ddbf89dc34c90b7413296a2919eadbcf64dcf6ec779658bdea92d\""
May 13 23:58:23.001272 containerd[1904]: time="2025-05-13T23:58:23.000974430Z" level=info msg="connecting to shim 762927962f5ddbf89dc34c90b7413296a2919eadbcf64dcf6ec779658bdea92d" address="unix:///run/containerd/s/9d65190768c3fad08ec65b6b79973ff5b34656a66fa460976f81a23b787eedb3" protocol=ttrpc version=3
May 13 23:58:23.027473 systemd[1]: Started cri-containerd-762927962f5ddbf89dc34c90b7413296a2919eadbcf64dcf6ec779658bdea92d.scope - libcontainer container 762927962f5ddbf89dc34c90b7413296a2919eadbcf64dcf6ec779658bdea92d.
May 13 23:58:23.094362 containerd[1904]: time="2025-05-13T23:58:23.094319267Z" level=info msg="StartContainer for \"762927962f5ddbf89dc34c90b7413296a2919eadbcf64dcf6ec779658bdea92d\" returns successfully"
May 13 23:58:23.805598 kubelet[3303]: I0513 23:58:23.790283 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kgvrz" podStartSLOduration=1.79026044 podStartE2EDuration="1.79026044s" podCreationTimestamp="2025-05-13 23:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:23.789835933 +0000 UTC m=+15.332880777" watchObservedRunningTime="2025-05-13 23:58:23.79026044 +0000 UTC m=+15.333305263"
May 13 23:58:32.323712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3027867448.mount: Deactivated successfully.
May 13 23:58:34.880403 containerd[1904]: time="2025-05-13T23:58:34.880336617Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:58:34.882383 containerd[1904]: time="2025-05-13T23:58:34.882299849Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 13 23:58:34.884545 containerd[1904]: time="2025-05-13T23:58:34.884467315Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:58:34.886673 containerd[1904]: time="2025-05-13T23:58:34.886617175Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.951672135s"
May 13 23:58:34.886970 containerd[1904]: time="2025-05-13T23:58:34.886676671Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 13 23:58:34.888557 containerd[1904]: time="2025-05-13T23:58:34.888502134Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 13 23:58:34.890420 containerd[1904]: time="2025-05-13T23:58:34.889683292Z" level=info msg="CreateContainer within sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 23:58:34.932352 containerd[1904]: time="2025-05-13T23:58:34.932188190Z" level=info msg="Container e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:34.974898 containerd[1904]: time="2025-05-13T23:58:34.974841567Z" level=info msg="CreateContainer within sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\""
May 13 23:58:34.975472 containerd[1904]: time="2025-05-13T23:58:34.975438984Z" level=info msg="StartContainer for \"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\""
May 13 23:58:34.976560 containerd[1904]: time="2025-05-13T23:58:34.976526241Z" level=info msg="connecting to shim e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38" address="unix:///run/containerd/s/029e98e9a54f1c0f872416d63f195c7c2ff545dc85d94e993fc2e5c9dab6f4ad" protocol=ttrpc version=3
May 13 23:58:35.078549 systemd[1]: Started cri-containerd-e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38.scope - libcontainer container e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38.
May 13 23:58:35.159005 containerd[1904]: time="2025-05-13T23:58:35.158763376Z" level=info msg="StartContainer for \"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\" returns successfully"
May 13 23:58:35.169995 systemd[1]: cri-containerd-e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38.scope: Deactivated successfully.
May 13 23:58:35.261416 containerd[1904]: time="2025-05-13T23:58:35.260031454Z" level=info msg="received exit event container_id:\"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\" id:\"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\" pid:3978 exited_at:{seconds:1747180715 nanos:179587817}"
May 13 23:58:35.270474 containerd[1904]: time="2025-05-13T23:58:35.269559428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\" id:\"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\" pid:3978 exited_at:{seconds:1747180715 nanos:179587817}"
May 13 23:58:35.316108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38-rootfs.mount: Deactivated successfully.
May 13 23:58:35.759540 containerd[1904]: time="2025-05-13T23:58:35.758156184Z" level=info msg="CreateContainer within sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 23:58:35.774317 containerd[1904]: time="2025-05-13T23:58:35.773976023Z" level=info msg="Container 3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:35.784660 containerd[1904]: time="2025-05-13T23:58:35.784576323Z" level=info msg="CreateContainer within sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\""
May 13 23:58:35.786290 containerd[1904]: time="2025-05-13T23:58:35.785322975Z" level=info msg="StartContainer for \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\""
May 13 23:58:35.786290 containerd[1904]: time="2025-05-13T23:58:35.786063122Z" level=info msg="connecting to shim 3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7" address="unix:///run/containerd/s/029e98e9a54f1c0f872416d63f195c7c2ff545dc85d94e993fc2e5c9dab6f4ad" protocol=ttrpc version=3
May 13 23:58:35.811644 systemd[1]: Started cri-containerd-3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7.scope - libcontainer container 3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7.
May 13 23:58:35.848088 containerd[1904]: time="2025-05-13T23:58:35.847952851Z" level=info msg="StartContainer for \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\" returns successfully"
May 13 23:58:35.862084 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:58:35.862512 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:58:35.863025 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 13 23:58:35.865827 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:58:35.867003 systemd[1]: cri-containerd-3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7.scope: Deactivated successfully.
May 13 23:58:35.870813 containerd[1904]: time="2025-05-13T23:58:35.870705297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\" id:\"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\" pid:4024 exited_at:{seconds:1747180715 nanos:867467334}"
May 13 23:58:35.871147 containerd[1904]: time="2025-05-13T23:58:35.870905029Z" level=info msg="received exit event container_id:\"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\" id:\"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\" pid:4024 exited_at:{seconds:1747180715 nanos:867467334}"
May 13 23:58:35.905683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:58:36.659947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249408120.mount: Deactivated successfully.
May 13 23:58:36.764905 containerd[1904]: time="2025-05-13T23:58:36.764865929Z" level=info msg="CreateContainer within sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:58:36.786231 containerd[1904]: time="2025-05-13T23:58:36.782767914Z" level=info msg="Container b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:36.788265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134104531.mount: Deactivated successfully.
May 13 23:58:36.797078 containerd[1904]: time="2025-05-13T23:58:36.797042734Z" level=info msg="CreateContainer within sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\""
May 13 23:58:36.797961 containerd[1904]: time="2025-05-13T23:58:36.797931437Z" level=info msg="StartContainer for \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\""
May 13 23:58:36.799220 containerd[1904]: time="2025-05-13T23:58:36.799184129Z" level=info msg="connecting to shim b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5" address="unix:///run/containerd/s/029e98e9a54f1c0f872416d63f195c7c2ff545dc85d94e993fc2e5c9dab6f4ad" protocol=ttrpc version=3
May 13 23:58:36.823406 systemd[1]: Started cri-containerd-b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5.scope - libcontainer container b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5.
May 13 23:58:36.861614 systemd[1]: cri-containerd-b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5.scope: Deactivated successfully.
May 13 23:58:36.864015 containerd[1904]: time="2025-05-13T23:58:36.863877921Z" level=info msg="received exit event container_id:\"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\" id:\"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\" pid:4074 exited_at:{seconds:1747180716 nanos:863487718}"
May 13 23:58:36.864214 containerd[1904]: time="2025-05-13T23:58:36.864180705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\" id:\"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\" pid:4074 exited_at:{seconds:1747180716 nanos:863487718}"
May 13 23:58:36.899080 containerd[1904]: time="2025-05-13T23:58:36.899037191Z" level=info msg="StartContainer for \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\" returns successfully"
May 13 23:58:37.769261 containerd[1904]: time="2025-05-13T23:58:37.768426501Z" level=info msg="CreateContainer within sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:58:37.790177 containerd[1904]: time="2025-05-13T23:58:37.788693647Z" level=info msg="Container 472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:37.810045 containerd[1904]: time="2025-05-13T23:58:37.809994150Z" level=info msg="CreateContainer within sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\""
May 13 23:58:37.811884 containerd[1904]: time="2025-05-13T23:58:37.810900312Z" level=info msg="StartContainer for \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\""
May 13 23:58:37.811884 containerd[1904]: time="2025-05-13T23:58:37.811764544Z" level=info msg="connecting to shim 472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c" address="unix:///run/containerd/s/029e98e9a54f1c0f872416d63f195c7c2ff545dc85d94e993fc2e5c9dab6f4ad" protocol=ttrpc version=3
May 13 23:58:37.834403 systemd[1]: Started cri-containerd-472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c.scope - libcontainer container 472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c.
May 13 23:58:37.865617 systemd[1]: cri-containerd-472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c.scope: Deactivated successfully.
May 13 23:58:37.868259 containerd[1904]: time="2025-05-13T23:58:37.868087772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\" id:\"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\" pid:4117 exited_at:{seconds:1747180717 nanos:867834410}"
May 13 23:58:37.869499 containerd[1904]: time="2025-05-13T23:58:37.869334923Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9241d94_e91e_459f_a5a5_a9f03cd3ed4d.slice/cri-containerd-472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c.scope/memory.events\": no such file or directory"
May 13 23:58:37.869937 containerd[1904]: time="2025-05-13T23:58:37.869824257Z" level=info msg="received exit event container_id:\"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\" id:\"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\" pid:4117 exited_at:{seconds:1747180717 nanos:867834410}"
May 13 23:58:37.881236 containerd[1904]: time="2025-05-13T23:58:37.881177679Z" level=info msg="StartContainer for \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\" returns successfully"
May 13 23:58:37.899793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c-rootfs.mount: Deactivated successfully.
May 13 23:58:38.783566 containerd[1904]: time="2025-05-13T23:58:38.783522151Z" level=info msg="CreateContainer within sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:58:38.819241 containerd[1904]: time="2025-05-13T23:58:38.819157606Z" level=info msg="Container 74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:38.839558 containerd[1904]: time="2025-05-13T23:58:38.839508997Z" level=info msg="CreateContainer within sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\""
May 13 23:58:38.844567 containerd[1904]: time="2025-05-13T23:58:38.844358281Z" level=info msg="StartContainer for \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\""
May 13 23:58:38.848684 containerd[1904]: time="2025-05-13T23:58:38.848646032Z" level=info msg="connecting to shim 74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c" address="unix:///run/containerd/s/029e98e9a54f1c0f872416d63f195c7c2ff545dc85d94e993fc2e5c9dab6f4ad" protocol=ttrpc version=3
May 13 23:58:38.907569 systemd[1]: Started cri-containerd-74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c.scope - libcontainer container 74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c.
May 13 23:58:38.997235 containerd[1904]: time="2025-05-13T23:58:38.995289845Z" level=info msg="StartContainer for \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" returns successfully"
May 13 23:58:39.154979 containerd[1904]: time="2025-05-13T23:58:39.153693129Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:58:39.167393 containerd[1904]: time="2025-05-13T23:58:39.166921097Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 13 23:58:39.170317 containerd[1904]: time="2025-05-13T23:58:39.170259541Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:58:39.172398 containerd[1904]: time="2025-05-13T23:58:39.172355488Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.283802835s"
May 13 23:58:39.172567 containerd[1904]: time="2025-05-13T23:58:39.172544427Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 13 23:58:39.182264 containerd[1904]: time="2025-05-13T23:58:39.182193922Z" level=info msg="CreateContainer within sandbox \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 23:58:39.190307 containerd[1904]: time="2025-05-13T23:58:39.189239278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" id:\"96b68c891f5d92f4225ca2c7fb80aa65b3a4bfee53935e25431534ee93480877\" pid:4191 exited_at:{seconds:1747180719 nanos:180596157}"
May 13 23:58:39.223535 kubelet[3303]: I0513 23:58:39.223511 3303 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 13 23:58:39.235583 containerd[1904]: time="2025-05-13T23:58:39.235447347Z" level=info msg="Container 727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:39.236755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958018548.mount: Deactivated successfully.
May 13 23:58:39.257330 kubelet[3303]: I0513 23:58:39.257286 3303 topology_manager.go:215] "Topology Admit Handler" podUID="c8188e27-983c-42f9-9285-eb0e4650dfd5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-567d8"
May 13 23:58:39.260384 kubelet[3303]: I0513 23:58:39.260097 3303 topology_manager.go:215] "Topology Admit Handler" podUID="c39ff2a7-1c21-4a11-9f19-4525719a9d74" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gmj8m"
May 13 23:58:39.264503 containerd[1904]: time="2025-05-13T23:58:39.264455987Z" level=info msg="CreateContainer within sandbox \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\""
May 13 23:58:39.275587 containerd[1904]: time="2025-05-13T23:58:39.275431619Z" level=info msg="StartContainer for \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\""
May 13 23:58:39.277817 containerd[1904]: time="2025-05-13T23:58:39.277767612Z" level=info msg="connecting to shim 727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd" address="unix:///run/containerd/s/02016a49f6953922492957db7cd9c90a6019eb7a0a06d593e3d60dd4b7b17f0d" protocol=ttrpc version=3
May 13 23:58:39.284030 systemd[1]: Created slice kubepods-burstable-podc8188e27_983c_42f9_9285_eb0e4650dfd5.slice - libcontainer container kubepods-burstable-podc8188e27_983c_42f9_9285_eb0e4650dfd5.slice.
May 13 23:58:39.298082 systemd[1]: Created slice kubepods-burstable-podc39ff2a7_1c21_4a11_9f19_4525719a9d74.slice - libcontainer container kubepods-burstable-podc39ff2a7_1c21_4a11_9f19_4525719a9d74.slice.
May 13 23:58:39.332739 systemd[1]: Started cri-containerd-727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd.scope - libcontainer container 727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd.
May 13 23:58:39.333334 kubelet[3303]: I0513 23:58:39.332924 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8188e27-983c-42f9-9285-eb0e4650dfd5-config-volume\") pod \"coredns-7db6d8ff4d-567d8\" (UID: \"c8188e27-983c-42f9-9285-eb0e4650dfd5\") " pod="kube-system/coredns-7db6d8ff4d-567d8"
May 13 23:58:39.334050 kubelet[3303]: I0513 23:58:39.333508 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kmz7\" (UniqueName: \"kubernetes.io/projected/c8188e27-983c-42f9-9285-eb0e4650dfd5-kube-api-access-4kmz7\") pod \"coredns-7db6d8ff4d-567d8\" (UID: \"c8188e27-983c-42f9-9285-eb0e4650dfd5\") " pod="kube-system/coredns-7db6d8ff4d-567d8"
May 13 23:58:39.334802 kubelet[3303]: I0513 23:58:39.334772 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c39ff2a7-1c21-4a11-9f19-4525719a9d74-config-volume\") pod \"coredns-7db6d8ff4d-gmj8m\" (UID: \"c39ff2a7-1c21-4a11-9f19-4525719a9d74\") " pod="kube-system/coredns-7db6d8ff4d-gmj8m"
May 13 23:58:39.335000 kubelet[3303]: I0513 23:58:39.334813 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9tw\" (UniqueName: \"kubernetes.io/projected/c39ff2a7-1c21-4a11-9f19-4525719a9d74-kube-api-access-ts9tw\") pod \"coredns-7db6d8ff4d-gmj8m\" (UID: \"c39ff2a7-1c21-4a11-9f19-4525719a9d74\") " pod="kube-system/coredns-7db6d8ff4d-gmj8m"
May 13 23:58:39.400191 containerd[1904]: time="2025-05-13T23:58:39.400123838Z" level=info msg="StartContainer for \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" returns successfully"
May 13 23:58:39.594108 containerd[1904]: time="2025-05-13T23:58:39.593985567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-567d8,Uid:c8188e27-983c-42f9-9285-eb0e4650dfd5,Namespace:kube-system,Attempt:0,}"
May 13 23:58:39.606122 containerd[1904]: time="2025-05-13T23:58:39.606072445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gmj8m,Uid:c39ff2a7-1c21-4a11-9f19-4525719a9d74,Namespace:kube-system,Attempt:0,}"
May 13 23:58:39.932032 kubelet[3303]: I0513 23:58:39.931929 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-plzvz" podStartSLOduration=1.738257452 podStartE2EDuration="17.931907994s" podCreationTimestamp="2025-05-13 23:58:22 +0000 UTC" firstStartedPulling="2025-05-13 23:58:22.981291506 +0000 UTC m=+14.524336308" lastFinishedPulling="2025-05-13 23:58:39.174942037 +0000 UTC m=+30.717986850" observedRunningTime="2025-05-13 23:58:39.856068969 +0000 UTC m=+31.399113789" watchObservedRunningTime="2025-05-13 23:58:39.931907994 +0000 UTC m=+31.474952816"
May 13 23:58:40.261654 systemd[1]: Started sshd@9-172.31.16.70:22-147.75.109.163:55704.service - OpenSSH per-connection server daemon (147.75.109.163:55704).
May 13 23:58:40.522997 sshd[4317]: Accepted publickey for core from 147.75.109.163 port 55704 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:58:40.527035 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:40.539083 systemd-logind[1892]: New session 10 of user core.
May 13 23:58:40.557770 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 23:58:41.560321 sshd[4320]: Connection closed by 147.75.109.163 port 55704
May 13 23:58:41.561666 sshd-session[4317]: pam_unix(sshd:session): session closed for user core
May 13 23:58:41.575982 systemd[1]: sshd@9-172.31.16.70:22-147.75.109.163:55704.service: Deactivated successfully.
May 13 23:58:41.576558 systemd-logind[1892]: Session 10 logged out. Waiting for processes to exit.
May 13 23:58:41.579839 systemd[1]: session-10.scope: Deactivated successfully.
May 13 23:58:41.581726 systemd-logind[1892]: Removed session 10.
May 13 23:58:43.996375 systemd-networkd[1825]: cilium_host: Link UP
May 13 23:58:43.996498 systemd-networkd[1825]: cilium_net: Link UP
May 13 23:58:43.996637 systemd-networkd[1825]: cilium_net: Gained carrier
May 13 23:58:43.996780 systemd-networkd[1825]: cilium_host: Gained carrier
May 13 23:58:43.999791 (udev-worker)[4335]: Network interface NamePolicy= disabled on kernel command line.
May 13 23:58:44.000933 (udev-worker)[4337]: Network interface NamePolicy= disabled on kernel command line.
May 13 23:58:44.123852 systemd-networkd[1825]: cilium_vxlan: Link UP
May 13 23:58:44.123866 systemd-networkd[1825]: cilium_vxlan: Gained carrier
May 13 23:58:44.289419 systemd-networkd[1825]: cilium_net: Gained IPv6LL
May 13 23:58:44.561425 systemd-networkd[1825]: cilium_host: Gained IPv6LL
May 13 23:58:44.692421 kernel: NET: Registered PF_ALG protocol family
May 13 23:58:45.398074 systemd-networkd[1825]: lxc_health: Link UP
May 13 23:58:45.405694 (udev-worker)[4348]: Network interface NamePolicy= disabled on kernel command line.
May 13 23:58:45.407114 systemd-networkd[1825]: lxc_health: Gained carrier
May 13 23:58:45.746904 kernel: eth0: renamed from tmpdbad6
May 13 23:58:45.746581 systemd-networkd[1825]: lxc9956a24bcb93: Link UP
May 13 23:58:45.750918 systemd-networkd[1825]: lxc9956a24bcb93: Gained carrier
May 13 23:58:45.776667 systemd-networkd[1825]: lxcddf1a32fe57d: Link UP
May 13 23:58:45.791909 kernel: eth0: renamed from tmp70624
May 13 23:58:45.795117 systemd-networkd[1825]: lxcddf1a32fe57d: Gained carrier
May 13 23:58:45.970914 systemd-networkd[1825]: cilium_vxlan: Gained IPv6LL
May 13 23:58:46.548317 systemd-networkd[1825]: lxc_health: Gained IPv6LL
May 13 23:58:46.600441 systemd[1]: Started sshd@10-172.31.16.70:22-147.75.109.163:55712.service - OpenSSH per-connection server daemon (147.75.109.163:55712).
May 13 23:58:46.793315 kubelet[3303]: I0513 23:58:46.790196 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fpppb" podStartSLOduration=12.819800027 podStartE2EDuration="24.775592384s" podCreationTimestamp="2025-05-13 23:58:22 +0000 UTC" firstStartedPulling="2025-05-13 23:58:22.931813865 +0000 UTC m=+14.474858666" lastFinishedPulling="2025-05-13 23:58:34.8876062 +0000 UTC m=+26.430651023" observedRunningTime="2025-05-13 23:58:39.935158515 +0000 UTC m=+31.478203337" watchObservedRunningTime="2025-05-13 23:58:46.775592384 +0000 UTC m=+38.318637205"
May 13 23:58:46.849395 sshd[4695]: Accepted publickey for core from 147.75.109.163 port 55712 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:58:46.852877 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:46.863332 systemd-logind[1892]: New session 11 of user core.
May 13 23:58:46.870412 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 23:58:47.057489 systemd-networkd[1825]: lxcddf1a32fe57d: Gained IPv6LL
May 13 23:58:47.121504 systemd-networkd[1825]: lxc9956a24bcb93: Gained IPv6LL
May 13 23:58:47.212505 sshd[4697]: Connection closed by 147.75.109.163 port 55712
May 13 23:58:47.214241 sshd-session[4695]: pam_unix(sshd:session): session closed for user core
May 13 23:58:47.218623 systemd-logind[1892]: Session 11 logged out. Waiting for processes to exit.
May 13 23:58:47.221768 systemd[1]: sshd@10-172.31.16.70:22-147.75.109.163:55712.service: Deactivated successfully.
May 13 23:58:47.225389 systemd[1]: session-11.scope: Deactivated successfully.
May 13 23:58:47.227539 systemd-logind[1892]: Removed session 11.
May 13 23:58:49.893153 ntpd[1887]: Listen normally on 8 cilium_host 192.168.0.195:123
May 13 23:58:49.893265 ntpd[1887]: Listen normally on 9 cilium_net [fe80::70ac:63ff:fece:3a16%4]:123
May 13 23:58:49.893326 ntpd[1887]: Listen normally on 10 cilium_host [fe80::540a:66ff:fee5:7cd%5]:123
May 13 23:58:49.893365 ntpd[1887]: Listen normally on 11 cilium_vxlan [fe80::b4e1:9ff:febb:34d0%6]:123
May 13 23:58:49.893404 ntpd[1887]: Listen normally on 12 lxc_health [fe80::3cf4:58ff:fe62:f856%8]:123
May 13 23:58:49.893446 ntpd[1887]: Listen normally on 13 lxc9956a24bcb93 [fe80::3426:9fff:fefa:7689%10]:123
May 13 23:58:49.893484 ntpd[1887]: Listen normally on 14 lxcddf1a32fe57d [fe80::d431:7aff:fec7:7004%12]:123
May 13 23:58:50.600387 containerd[1904]: time="2025-05-13T23:58:50.600277140Z" level=info msg="connecting to shim 706249818fe2f45202231e17228c302dfb66816d5531596c4a02306166fb7367" address="unix:///run/containerd/s/a621a981201ae67dbb2cf9e1004014939389f3d7367fb554365f3037027e1bfe" namespace=k8s.io protocol=ttrpc version=3
May 13 23:58:50.611962 containerd[1904]: time="2025-05-13T23:58:50.611276156Z" level=info msg="connecting to shim dbad6cdd265e0d7b835608a7079ff9e77e848f5d88806dec41cdfef0361131ed" address="unix:///run/containerd/s/d7a6f2b4536456731c3ba0752ea401191e012b3534f525e710a5960b9d517a06" namespace=k8s.io protocol=ttrpc version=3
May 13 23:58:50.673511 systemd[1]: Started cri-containerd-706249818fe2f45202231e17228c302dfb66816d5531596c4a02306166fb7367.scope - libcontainer container 706249818fe2f45202231e17228c302dfb66816d5531596c4a02306166fb7367.
May 13 23:58:50.688469 systemd[1]: Started cri-containerd-dbad6cdd265e0d7b835608a7079ff9e77e848f5d88806dec41cdfef0361131ed.scope - libcontainer container dbad6cdd265e0d7b835608a7079ff9e77e848f5d88806dec41cdfef0361131ed.
May 13 23:58:50.808606 containerd[1904]: time="2025-05-13T23:58:50.808541393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-567d8,Uid:c8188e27-983c-42f9-9285-eb0e4650dfd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"706249818fe2f45202231e17228c302dfb66816d5531596c4a02306166fb7367\""
May 13 23:58:50.821541 containerd[1904]: time="2025-05-13T23:58:50.821420598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gmj8m,Uid:c39ff2a7-1c21-4a11-9f19-4525719a9d74,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbad6cdd265e0d7b835608a7079ff9e77e848f5d88806dec41cdfef0361131ed\""
May 13 23:58:50.829155 containerd[1904]: time="2025-05-13T23:58:50.829119516Z" level=info msg="CreateContainer within sandbox \"dbad6cdd265e0d7b835608a7079ff9e77e848f5d88806dec41cdfef0361131ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:58:50.831507 containerd[1904]: time="2025-05-13T23:58:50.831399250Z" level=info msg="CreateContainer within sandbox \"706249818fe2f45202231e17228c302dfb66816d5531596c4a02306166fb7367\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:58:50.860985 containerd[1904]: time="2025-05-13T23:58:50.860347536Z" level=info msg="Container c684a7139678422c81df13b7614975f4d67f4aecaed4072fb92d421b64559065: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:50.860985 containerd[1904]: time="2025-05-13T23:58:50.860390544Z" level=info msg="Container 655d7bc4774568dd2d173e47163a7978b5654caa7865df87fdf1f3dc43a25f54: CDI devices from CRI Config.CDIDevices: []"
May 13 23:58:50.874904 containerd[1904]: time="2025-05-13T23:58:50.874856451Z" level=info msg="CreateContainer within sandbox \"dbad6cdd265e0d7b835608a7079ff9e77e848f5d88806dec41cdfef0361131ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c684a7139678422c81df13b7614975f4d67f4aecaed4072fb92d421b64559065\""
May 13 23:58:50.875714 containerd[1904]: time="2025-05-13T23:58:50.875676942Z" level=info msg="StartContainer for \"c684a7139678422c81df13b7614975f4d67f4aecaed4072fb92d421b64559065\""
May 13 23:58:50.876895 containerd[1904]: time="2025-05-13T23:58:50.876577143Z" level=info msg="connecting to shim c684a7139678422c81df13b7614975f4d67f4aecaed4072fb92d421b64559065" address="unix:///run/containerd/s/d7a6f2b4536456731c3ba0752ea401191e012b3534f525e710a5960b9d517a06" protocol=ttrpc version=3
May 13 23:58:50.880268 containerd[1904]: time="2025-05-13T23:58:50.879566589Z" level=info msg="CreateContainer within sandbox \"706249818fe2f45202231e17228c302dfb66816d5531596c4a02306166fb7367\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"655d7bc4774568dd2d173e47163a7978b5654caa7865df87fdf1f3dc43a25f54\""
May 13 23:58:50.881481 containerd[1904]: time="2025-05-13T23:58:50.881453026Z" level=info msg="StartContainer for \"655d7bc4774568dd2d173e47163a7978b5654caa7865df87fdf1f3dc43a25f54\""
May 13 23:58:50.883718 containerd[1904]: time="2025-05-13T23:58:50.883568091Z" level=info msg="connecting to shim 655d7bc4774568dd2d173e47163a7978b5654caa7865df87fdf1f3dc43a25f54" address="unix:///run/containerd/s/a621a981201ae67dbb2cf9e1004014939389f3d7367fb554365f3037027e1bfe" protocol=ttrpc version=3
May 13 23:58:50.910991 systemd[1]: Started cri-containerd-c684a7139678422c81df13b7614975f4d67f4aecaed4072fb92d421b64559065.scope - libcontainer container c684a7139678422c81df13b7614975f4d67f4aecaed4072fb92d421b64559065.
May 13 23:58:50.922428 systemd[1]: Started cri-containerd-655d7bc4774568dd2d173e47163a7978b5654caa7865df87fdf1f3dc43a25f54.scope - libcontainer container 655d7bc4774568dd2d173e47163a7978b5654caa7865df87fdf1f3dc43a25f54.
May 13 23:58:50.984258 containerd[1904]: time="2025-05-13T23:58:50.983374740Z" level=info msg="StartContainer for \"655d7bc4774568dd2d173e47163a7978b5654caa7865df87fdf1f3dc43a25f54\" returns successfully"
May 13 23:58:50.984817 containerd[1904]: time="2025-05-13T23:58:50.984665886Z" level=info msg="StartContainer for \"c684a7139678422c81df13b7614975f4d67f4aecaed4072fb92d421b64559065\" returns successfully"
May 13 23:58:51.512546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171384690.mount: Deactivated successfully.
May 13 23:58:51.919482 kubelet[3303]: I0513 23:58:51.918938 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-567d8" podStartSLOduration=29.918921228 podStartE2EDuration="29.918921228s" podCreationTimestamp="2025-05-13 23:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:51.917303612 +0000 UTC m=+43.460348431" watchObservedRunningTime="2025-05-13 23:58:51.918921228 +0000 UTC m=+43.461966047"
May 13 23:58:51.931574 kubelet[3303]: I0513 23:58:51.931509 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gmj8m" podStartSLOduration=29.931461742 podStartE2EDuration="29.931461742s" podCreationTimestamp="2025-05-13 23:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:58:51.930606397 +0000 UTC m=+43.473651223" watchObservedRunningTime="2025-05-13 23:58:51.931461742 +0000 UTC m=+43.474506609"
May 13 23:58:52.245586 systemd[1]: Started sshd@11-172.31.16.70:22-147.75.109.163:57012.service - OpenSSH per-connection server daemon (147.75.109.163:57012).
May 13 23:58:52.460438 sshd[4891]: Accepted publickey for core from 147.75.109.163 port 57012 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:58:52.464460 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:52.473518 systemd-logind[1892]: New session 12 of user core.
May 13 23:58:52.481509 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 23:58:52.749676 sshd[4893]: Connection closed by 147.75.109.163 port 57012
May 13 23:58:52.750619 sshd-session[4891]: pam_unix(sshd:session): session closed for user core
May 13 23:58:52.754962 systemd[1]: sshd@11-172.31.16.70:22-147.75.109.163:57012.service: Deactivated successfully.
May 13 23:58:52.757288 systemd[1]: session-12.scope: Deactivated successfully.
May 13 23:58:52.758149 systemd-logind[1892]: Session 12 logged out. Waiting for processes to exit.
May 13 23:58:52.760887 systemd-logind[1892]: Removed session 12.
May 13 23:58:57.783294 systemd[1]: Started sshd@12-172.31.16.70:22-147.75.109.163:57028.service - OpenSSH per-connection server daemon (147.75.109.163:57028).
May 13 23:58:57.954785 sshd[4911]: Accepted publickey for core from 147.75.109.163 port 57028 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:58:57.956304 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:57.961336 systemd-logind[1892]: New session 13 of user core.
May 13 23:58:57.969420 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 23:58:58.170449 sshd[4913]: Connection closed by 147.75.109.163 port 57028
May 13 23:58:58.171346 sshd-session[4911]: pam_unix(sshd:session): session closed for user core
May 13 23:58:58.176150 systemd[1]: sshd@12-172.31.16.70:22-147.75.109.163:57028.service: Deactivated successfully.
May 13 23:58:58.178405 systemd[1]: session-13.scope: Deactivated successfully.
May 13 23:58:58.179460 systemd-logind[1892]: Session 13 logged out. Waiting for processes to exit.
May 13 23:58:58.180737 systemd-logind[1892]: Removed session 13.
May 13 23:58:58.203955 systemd[1]: Started sshd@13-172.31.16.70:22-147.75.109.163:49614.service - OpenSSH per-connection server daemon (147.75.109.163:49614).
May 13 23:58:58.391929 sshd[4926]: Accepted publickey for core from 147.75.109.163 port 49614 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:58:58.393430 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:58.398277 systemd-logind[1892]: New session 14 of user core.
May 13 23:58:58.404420 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 23:58:58.712563 sshd[4928]: Connection closed by 147.75.109.163 port 49614
May 13 23:58:58.714337 sshd-session[4926]: pam_unix(sshd:session): session closed for user core
May 13 23:58:58.721465 systemd[1]: sshd@13-172.31.16.70:22-147.75.109.163:49614.service: Deactivated successfully.
May 13 23:58:58.726371 systemd[1]: session-14.scope: Deactivated successfully.
May 13 23:58:58.727502 systemd-logind[1892]: Session 14 logged out. Waiting for processes to exit.
May 13 23:58:58.728709 systemd-logind[1892]: Removed session 14.
May 13 23:58:58.750263 systemd[1]: Started sshd@14-172.31.16.70:22-147.75.109.163:49630.service - OpenSSH per-connection server daemon (147.75.109.163:49630).
May 13 23:58:58.920106 sshd[4938]: Accepted publickey for core from 147.75.109.163 port 49630 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:58:58.922017 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:58.931009 systemd-logind[1892]: New session 15 of user core.
May 13 23:58:58.941477 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 23:58:59.181990 sshd[4943]: Connection closed by 147.75.109.163 port 49630
May 13 23:58:59.182595 sshd-session[4938]: pam_unix(sshd:session): session closed for user core
May 13 23:58:59.186185 systemd[1]: sshd@14-172.31.16.70:22-147.75.109.163:49630.service: Deactivated successfully.
May 13 23:58:59.188248 systemd[1]: session-15.scope: Deactivated successfully.
May 13 23:58:59.189157 systemd-logind[1892]: Session 15 logged out. Waiting for processes to exit.
May 13 23:58:59.190695 systemd-logind[1892]: Removed session 15.
May 13 23:59:04.215613 systemd[1]: Started sshd@15-172.31.16.70:22-147.75.109.163:49638.service - OpenSSH per-connection server daemon (147.75.109.163:49638).
May 13 23:59:04.388873 sshd[4957]: Accepted publickey for core from 147.75.109.163 port 49638 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:04.390305 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:04.403294 systemd-logind[1892]: New session 16 of user core.
May 13 23:59:04.412490 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:59:04.602634 sshd[4959]: Connection closed by 147.75.109.163 port 49638
May 13 23:59:04.603492 sshd-session[4957]: pam_unix(sshd:session): session closed for user core
May 13 23:59:04.606766 systemd[1]: sshd@15-172.31.16.70:22-147.75.109.163:49638.service: Deactivated successfully.
May 13 23:59:04.608914 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:59:04.610722 systemd-logind[1892]: Session 16 logged out. Waiting for processes to exit.
May 13 23:59:04.611948 systemd-logind[1892]: Removed session 16.
May 13 23:59:09.640455 systemd[1]: Started sshd@16-172.31.16.70:22-147.75.109.163:60404.service - OpenSSH per-connection server daemon (147.75.109.163:60404).
May 13 23:59:09.807056 sshd[4973]: Accepted publickey for core from 147.75.109.163 port 60404 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:09.808778 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:09.813617 systemd-logind[1892]: New session 17 of user core.
May 13 23:59:09.820401 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:59:10.016522 sshd[4975]: Connection closed by 147.75.109.163 port 60404
May 13 23:59:10.017343 sshd-session[4973]: pam_unix(sshd:session): session closed for user core
May 13 23:59:10.021306 systemd[1]: sshd@16-172.31.16.70:22-147.75.109.163:60404.service: Deactivated successfully.
May 13 23:59:10.023184 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:59:10.024272 systemd-logind[1892]: Session 17 logged out. Waiting for processes to exit.
May 13 23:59:10.025169 systemd-logind[1892]: Removed session 17.
May 13 23:59:10.052457 systemd[1]: Started sshd@17-172.31.16.70:22-147.75.109.163:60408.service - OpenSSH per-connection server daemon (147.75.109.163:60408).
May 13 23:59:10.225665 sshd[4987]: Accepted publickey for core from 147.75.109.163 port 60408 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:10.227190 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:10.232517 systemd-logind[1892]: New session 18 of user core.
May 13 23:59:10.238417 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:59:10.867899 sshd[4989]: Connection closed by 147.75.109.163 port 60408
May 13 23:59:10.869006 sshd-session[4987]: pam_unix(sshd:session): session closed for user core
May 13 23:59:10.878364 systemd[1]: sshd@17-172.31.16.70:22-147.75.109.163:60408.service: Deactivated successfully.
May 13 23:59:10.880360 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:59:10.881799 systemd-logind[1892]: Session 18 logged out. Waiting for processes to exit.
May 13 23:59:10.883066 systemd-logind[1892]: Removed session 18.
May 13 23:59:10.901377 systemd[1]: Started sshd@18-172.31.16.70:22-147.75.109.163:60416.service - OpenSSH per-connection server daemon (147.75.109.163:60416).
May 13 23:59:11.090885 sshd[5000]: Accepted publickey for core from 147.75.109.163 port 60416 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:11.092440 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:11.096985 systemd-logind[1892]: New session 19 of user core.
May 13 23:59:11.100408 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:59:13.033870 sshd[5002]: Connection closed by 147.75.109.163 port 60416
May 13 23:59:13.032790 sshd-session[5000]: pam_unix(sshd:session): session closed for user core
May 13 23:59:13.039469 systemd-logind[1892]: Session 19 logged out. Waiting for processes to exit.
May 13 23:59:13.040607 systemd[1]: sshd@18-172.31.16.70:22-147.75.109.163:60416.service: Deactivated successfully.
May 13 23:59:13.045363 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:59:13.046361 systemd-logind[1892]: Removed session 19.
May 13 23:59:13.062747 systemd[1]: Started sshd@19-172.31.16.70:22-147.75.109.163:60428.service - OpenSSH per-connection server daemon (147.75.109.163:60428).
May 13 23:59:13.234253 sshd[5019]: Accepted publickey for core from 147.75.109.163 port 60428 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:13.235775 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:13.240667 systemd-logind[1892]: New session 20 of user core.
May 13 23:59:13.246402 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 23:59:13.742570 sshd[5021]: Connection closed by 147.75.109.163 port 60428
May 13 23:59:13.743533 sshd-session[5019]: pam_unix(sshd:session): session closed for user core
May 13 23:59:13.748175 systemd-logind[1892]: Session 20 logged out. Waiting for processes to exit.
May 13 23:59:13.749331 systemd[1]: sshd@19-172.31.16.70:22-147.75.109.163:60428.service: Deactivated successfully.
May 13 23:59:13.751682 systemd[1]: session-20.scope: Deactivated successfully.
May 13 23:59:13.753492 systemd-logind[1892]: Removed session 20.
May 13 23:59:13.776447 systemd[1]: Started sshd@20-172.31.16.70:22-147.75.109.163:60430.service - OpenSSH per-connection server daemon (147.75.109.163:60430).
May 13 23:59:13.948907 sshd[5031]: Accepted publickey for core from 147.75.109.163 port 60430 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:13.950365 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:13.955626 systemd-logind[1892]: New session 21 of user core.
May 13 23:59:13.963441 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 23:59:14.160014 sshd[5033]: Connection closed by 147.75.109.163 port 60430
May 13 23:59:14.160828 sshd-session[5031]: pam_unix(sshd:session): session closed for user core
May 13 23:59:14.165313 systemd[1]: sshd@20-172.31.16.70:22-147.75.109.163:60430.service: Deactivated successfully.
May 13 23:59:14.167811 systemd[1]: session-21.scope: Deactivated successfully.
May 13 23:59:14.168925 systemd-logind[1892]: Session 21 logged out. Waiting for processes to exit.
May 13 23:59:14.170374 systemd-logind[1892]: Removed session 21.
May 13 23:59:19.192513 systemd[1]: Started sshd@21-172.31.16.70:22-147.75.109.163:57294.service - OpenSSH per-connection server daemon (147.75.109.163:57294).
May 13 23:59:19.358990 sshd[5048]: Accepted publickey for core from 147.75.109.163 port 57294 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:19.359598 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:19.363927 systemd-logind[1892]: New session 22 of user core.
May 13 23:59:19.371429 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 23:59:19.552976 sshd[5050]: Connection closed by 147.75.109.163 port 57294
May 13 23:59:19.553935 sshd-session[5048]: pam_unix(sshd:session): session closed for user core
May 13 23:59:19.557529 systemd[1]: sshd@21-172.31.16.70:22-147.75.109.163:57294.service: Deactivated successfully.
May 13 23:59:19.559753 systemd[1]: session-22.scope: Deactivated successfully.
May 13 23:59:19.560934 systemd-logind[1892]: Session 22 logged out. Waiting for processes to exit.
May 13 23:59:19.561903 systemd-logind[1892]: Removed session 22.
May 13 23:59:24.585448 systemd[1]: Started sshd@22-172.31.16.70:22-147.75.109.163:57302.service - OpenSSH per-connection server daemon (147.75.109.163:57302).
May 13 23:59:24.750645 sshd[5065]: Accepted publickey for core from 147.75.109.163 port 57302 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:24.752347 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:24.758857 systemd-logind[1892]: New session 23 of user core.
May 13 23:59:24.763503 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 23:59:24.943153 sshd[5067]: Connection closed by 147.75.109.163 port 57302
May 13 23:59:24.944739 sshd-session[5065]: pam_unix(sshd:session): session closed for user core
May 13 23:59:24.948418 systemd[1]: sshd@22-172.31.16.70:22-147.75.109.163:57302.service: Deactivated successfully.
May 13 23:59:24.950420 systemd[1]: session-23.scope: Deactivated successfully.
May 13 23:59:24.951120 systemd-logind[1892]: Session 23 logged out. Waiting for processes to exit.
May 13 23:59:24.952243 systemd-logind[1892]: Removed session 23.
May 13 23:59:29.977486 systemd[1]: Started sshd@23-172.31.16.70:22-147.75.109.163:56060.service - OpenSSH per-connection server daemon (147.75.109.163:56060).
May 13 23:59:30.152262 sshd[5079]: Accepted publickey for core from 147.75.109.163 port 56060 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:30.153763 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:30.161273 systemd-logind[1892]: New session 24 of user core.
May 13 23:59:30.164384 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 23:59:30.350596 sshd[5081]: Connection closed by 147.75.109.163 port 56060
May 13 23:59:30.351423 sshd-session[5079]: pam_unix(sshd:session): session closed for user core
May 13 23:59:30.354304 systemd[1]: sshd@23-172.31.16.70:22-147.75.109.163:56060.service: Deactivated successfully.
May 13 23:59:30.357681 systemd[1]: session-24.scope: Deactivated successfully.
May 13 23:59:30.358629 systemd-logind[1892]: Session 24 logged out. Waiting for processes to exit.
May 13 23:59:30.359806 systemd-logind[1892]: Removed session 24.
May 13 23:59:30.379872 systemd[1]: Started sshd@24-172.31.16.70:22-147.75.109.163:56076.service - OpenSSH per-connection server daemon (147.75.109.163:56076).
May 13 23:59:30.544149 sshd[5092]: Accepted publickey for core from 147.75.109.163 port 56076 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:30.545617 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:30.558742 systemd-logind[1892]: New session 25 of user core.
May 13 23:59:30.567532 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 23:59:32.062536 containerd[1904]: time="2025-05-13T23:59:32.062416525Z" level=info msg="StopContainer for \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" with timeout 30 (s)"
May 13 23:59:32.063960 containerd[1904]: time="2025-05-13T23:59:32.063188302Z" level=info msg="Stop container \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" with signal terminated"
May 13 23:59:32.077390 systemd[1]: cri-containerd-727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd.scope: Deactivated successfully.
May 13 23:59:32.082342 containerd[1904]: time="2025-05-13T23:59:32.082298625Z" level=info msg="received exit event container_id:\"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" id:\"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" pid:4239 exited_at:{seconds:1747180772 nanos:81155837}"
May 13 23:59:32.082572 containerd[1904]: time="2025-05-13T23:59:32.082546707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" id:\"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" pid:4239 exited_at:{seconds:1747180772 nanos:81155837}"
May 13 23:59:32.117469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd-rootfs.mount: Deactivated successfully.
May 13 23:59:32.123370 containerd[1904]: time="2025-05-13T23:59:32.123327042Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 23:59:32.128674 containerd[1904]: time="2025-05-13T23:59:32.128532784Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" id:\"1991c8fa8ae8a25ee3aaa75e1f4fbc0df7f9d76eaec7fce8e8d6932478427ec7\" pid:5128 exited_at:{seconds:1747180772 nanos:128116678}"
May 13 23:59:32.130778 containerd[1904]: time="2025-05-13T23:59:32.130719370Z" level=info msg="StopContainer for \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" with timeout 2 (s)"
May 13 23:59:32.130971 containerd[1904]: time="2025-05-13T23:59:32.130952284Z" level=info msg="Stop container \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" with signal terminated"
May 13 23:59:32.141895 systemd-networkd[1825]: lxc_health: Link DOWN
May 13 23:59:32.141905 systemd-networkd[1825]: lxc_health: Lost carrier
May 13 23:59:32.162906 systemd[1]: cri-containerd-74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c.scope: Deactivated successfully.
May 13 23:59:32.163305 systemd[1]: cri-containerd-74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c.scope: Consumed 8.166s CPU time, 191.4M memory peak, 70.1M read from disk, 13.3M written to disk.
May 13 23:59:32.166972 containerd[1904]: time="2025-05-13T23:59:32.166934033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" id:\"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" pid:4158 exited_at:{seconds:1747180772 nanos:166614269}"
May 13 23:59:32.167792 containerd[1904]: time="2025-05-13T23:59:32.167165995Z" level=info msg="received exit event container_id:\"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" id:\"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" pid:4158 exited_at:{seconds:1747180772 nanos:166614269}"
May 13 23:59:32.185234 containerd[1904]: time="2025-05-13T23:59:32.185177864Z" level=info msg="StopContainer for \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" returns successfully"
May 13 23:59:32.188364 containerd[1904]: time="2025-05-13T23:59:32.188193429Z" level=info msg="StopPodSandbox for \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\""
May 13 23:59:32.196159 containerd[1904]: time="2025-05-13T23:59:32.195976595Z" level=info msg="Container to stop \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:59:32.209900 systemd[1]: cri-containerd-3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd.scope: Deactivated successfully.
May 13 23:59:32.216948 containerd[1904]: time="2025-05-13T23:59:32.216911504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" id:\"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" pid:3750 exit_status:137 exited_at:{seconds:1747180772 nanos:215557892}"
May 13 23:59:32.221906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c-rootfs.mount: Deactivated successfully.
May 13 23:59:32.244091 containerd[1904]: time="2025-05-13T23:59:32.244049863Z" level=info msg="StopContainer for \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" returns successfully"
May 13 23:59:32.245028 containerd[1904]: time="2025-05-13T23:59:32.244647419Z" level=info msg="StopPodSandbox for \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\""
May 13 23:59:32.245028 containerd[1904]: time="2025-05-13T23:59:32.244732972Z" level=info msg="Container to stop \"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:59:32.245028 containerd[1904]: time="2025-05-13T23:59:32.244754318Z" level=info msg="Container to stop \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:59:32.245028 containerd[1904]: time="2025-05-13T23:59:32.244769094Z" level=info msg="Container to stop \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:59:32.245028 containerd[1904]: time="2025-05-13T23:59:32.244783012Z" level=info msg="Container to stop \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:59:32.245028 containerd[1904]: time="2025-05-13T23:59:32.244796023Z" level=info msg="Container to stop \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:59:32.256567 systemd[1]: cri-containerd-f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3.scope: Deactivated successfully.
May 13 23:59:32.265623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd-rootfs.mount: Deactivated successfully.
May 13 23:59:32.282486 containerd[1904]: time="2025-05-13T23:59:32.282443435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" id:\"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" pid:3749 exit_status:137 exited_at:{seconds:1747180772 nanos:256332551}"
May 13 23:59:32.286078 containerd[1904]: time="2025-05-13T23:59:32.285685090Z" level=info msg="received exit event sandbox_id:\"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" exit_status:137 exited_at:{seconds:1747180772 nanos:215557892}"
May 13 23:59:32.287251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd-shm.mount: Deactivated successfully.
May 13 23:59:32.287690 containerd[1904]: time="2025-05-13T23:59:32.286433138Z" level=info msg="shim disconnected" id=3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd namespace=k8s.io
May 13 23:59:32.288135 containerd[1904]: time="2025-05-13T23:59:32.287928385Z" level=warning msg="cleaning up after shim disconnected" id=3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd namespace=k8s.io
May 13 23:59:32.288135 containerd[1904]: time="2025-05-13T23:59:32.287950909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:59:32.289069 containerd[1904]: time="2025-05-13T23:59:32.286584404Z" level=info msg="TearDown network for sandbox \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" successfully"
May 13 23:59:32.289069 containerd[1904]: time="2025-05-13T23:59:32.289046295Z" level=info msg="StopPodSandbox for \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" returns successfully"
May 13 23:59:32.304683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3-rootfs.mount: Deactivated successfully.
May 13 23:59:32.312888 containerd[1904]: time="2025-05-13T23:59:32.312712806Z" level=info msg="received exit event sandbox_id:\"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" exit_status:137 exited_at:{seconds:1747180772 nanos:256332551}"
May 13 23:59:32.316486 containerd[1904]: time="2025-05-13T23:59:32.316450795Z" level=info msg="TearDown network for sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" successfully"
May 13 23:59:32.316486 containerd[1904]: time="2025-05-13T23:59:32.316482855Z" level=info msg="StopPodSandbox for \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" returns successfully"
May 13 23:59:32.322255 containerd[1904]: time="2025-05-13T23:59:32.322175647Z" level=info msg="shim disconnected" id=f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3 namespace=k8s.io
May 13 23:59:32.322376 containerd[1904]: time="2025-05-13T23:59:32.322344801Z" level=warning msg="cleaning up after shim disconnected" id=f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3 namespace=k8s.io
May 13 23:59:32.322433 containerd[1904]: time="2025-05-13T23:59:32.322364893Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:59:32.420985 kubelet[3303]: I0513 23:59:32.420919 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-lib-modules\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.420985 kubelet[3303]: I0513 23:59:32.420987 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-hostproc\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.421579 kubelet[3303]: I0513 23:59:32.421023 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swhs7\" (UniqueName: \"kubernetes.io/projected/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-kube-api-access-swhs7\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.421579 kubelet[3303]: I0513 23:59:32.421044 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-host-proc-sys-kernel\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.421579 kubelet[3303]: I0513 23:59:32.421063 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-cgroup\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.421579 kubelet[3303]: I0513 23:59:32.421088 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6h48\" (UniqueName: \"kubernetes.io/projected/7caa7b57-cb4d-4a81-aecb-798953548156-kube-api-access-f6h48\") pod \"7caa7b57-cb4d-4a81-aecb-798953548156\" (UID: \"7caa7b57-cb4d-4a81-aecb-798953548156\") "
May 13 23:59:32.421579 kubelet[3303]: I0513 23:59:32.421112 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-hubble-tls\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.421579 kubelet[3303]: I0513 23:59:32.421138 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-bpf-maps\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.421829 kubelet[3303]: I0513 23:59:32.421167 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-config-path\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.421829 kubelet[3303]: I0513 23:59:32.421188 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-host-proc-sys-net\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.421829 kubelet[3303]: I0513 23:59:32.421236 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cni-path\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.421829 kubelet[3303]: I0513 23:59:32.421266 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7caa7b57-cb4d-4a81-aecb-798953548156-cilium-config-path\") pod \"7caa7b57-cb4d-4a81-aecb-798953548156\" (UID: \"7caa7b57-cb4d-4a81-aecb-798953548156\") "
May 13 23:59:32.421829 kubelet[3303]: I0513 23:59:32.421288 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-run\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.421829 kubelet[3303]: I0513 23:59:32.421312 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-xtables-lock\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.423319 kubelet[3303]: I0513 23:59:32.421334 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-etc-cni-netd\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.423319 kubelet[3303]: I0513 23:59:32.421365 3303 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-clustermesh-secrets\") pod \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\" (UID: \"a9241d94-e91e-459f-a5a5-a9f03cd3ed4d\") "
May 13 23:59:32.425860 kubelet[3303]: I0513 23:59:32.424035 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:59:32.425976 kubelet[3303]: I0513 23:59:32.425904 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:59:32.425976 kubelet[3303]: I0513 23:59:32.425925 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-hostproc" (OuterVolumeSpecName: "hostproc") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:59:32.434228 kubelet[3303]: I0513 23:59:32.433619 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 23:59:32.434228 kubelet[3303]: I0513 23:59:32.433682 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:59:32.434228 kubelet[3303]: I0513 23:59:32.433699 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cni-path" (OuterVolumeSpecName: "cni-path") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:59:32.435758 kubelet[3303]: I0513 23:59:32.435726 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7caa7b57-cb4d-4a81-aecb-798953548156-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7caa7b57-cb4d-4a81-aecb-798953548156" (UID: "7caa7b57-cb4d-4a81-aecb-798953548156"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 23:59:32.435843 kubelet[3303]: I0513 23:59:32.435773 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:59:32.435843 kubelet[3303]: I0513 23:59:32.435792 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:59:32.435843 kubelet[3303]: I0513 23:59:32.435807 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:59:32.436393 kubelet[3303]: I0513 23:59:32.436367 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 23:59:32.436524 kubelet[3303]: I0513 23:59:32.436498 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:59:32.436885 kubelet[3303]: I0513 23:59:32.436518 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:59:32.438336 kubelet[3303]: I0513 23:59:32.438039 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-kube-api-access-swhs7" (OuterVolumeSpecName: "kube-api-access-swhs7") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "kube-api-access-swhs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 23:59:32.439470 kubelet[3303]: I0513 23:59:32.439372 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7caa7b57-cb4d-4a81-aecb-798953548156-kube-api-access-f6h48" (OuterVolumeSpecName: "kube-api-access-f6h48") pod "7caa7b57-cb4d-4a81-aecb-798953548156" (UID: "7caa7b57-cb4d-4a81-aecb-798953548156"). InnerVolumeSpecName "kube-api-access-f6h48". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 23:59:32.441038 kubelet[3303]: I0513 23:59:32.441004 3303 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" (UID: "a9241d94-e91e-459f-a5a5-a9f03cd3ed4d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 23:59:32.522399 kubelet[3303]: I0513 23:59:32.522359 3303 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-hostproc\") on node \"ip-172-31-16-70\" DevicePath \"\""
May 13 23:59:32.522399 kubelet[3303]: I0513 23:59:32.522393 3303 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-host-proc-sys-kernel\") on node \"ip-172-31-16-70\" DevicePath \"\""
May 13 23:59:32.522399 kubelet[3303]: I0513 23:59:32.522403 3303 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-swhs7\" (UniqueName: \"kubernetes.io/projected/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-kube-api-access-swhs7\") on node \"ip-172-31-16-70\" DevicePath \"\""
May 13 23:59:32.522399 kubelet[3303]: I0513 23:59:32.522414 3303 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-cgroup\") on node \"ip-172-31-16-70\" DevicePath \"\""
May 13 23:59:32.522626 kubelet[3303]: I0513 23:59:32.522425 3303 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-f6h48\" (UniqueName: \"kubernetes.io/projected/7caa7b57-cb4d-4a81-aecb-798953548156-kube-api-access-f6h48\") on node \"ip-172-31-16-70\" DevicePath \"\""
May 13 23:59:32.522626 kubelet[3303]: I0513 23:59:32.522433 3303 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-hubble-tls\") on node \"ip-172-31-16-70\" DevicePath \"\""
May 13 23:59:32.522626 kubelet[3303]: I0513 23:59:32.522442 3303 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-config-path\") on node \"ip-172-31-16-70\" DevicePath \"\""
May 13 23:59:32.522626 kubelet[3303]: I0513 23:59:32.522450 3303 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-host-proc-sys-net\") on node \"ip-172-31-16-70\" DevicePath \"\""
May 13 23:59:32.522626 kubelet[3303]: I0513 23:59:32.522457 3303 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cni-path\") on node \"ip-172-31-16-70\" DevicePath \"\""
May 13 23:59:32.522626 kubelet[3303]: I0513 23:59:32.522465 3303 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-bpf-maps\") on node \"ip-172-31-16-70\" DevicePath \"\""
May 13 23:59:32.522626 kubelet[3303]: I0513 23:59:32.522474 3303 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName:
\"kubernetes.io/configmap/7caa7b57-cb4d-4a81-aecb-798953548156-cilium-config-path\") on node \"ip-172-31-16-70\" DevicePath \"\"" May 13 23:59:32.522626 kubelet[3303]: I0513 23:59:32.522481 3303 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-cilium-run\") on node \"ip-172-31-16-70\" DevicePath \"\"" May 13 23:59:32.522822 kubelet[3303]: I0513 23:59:32.522491 3303 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-xtables-lock\") on node \"ip-172-31-16-70\" DevicePath \"\"" May 13 23:59:32.522822 kubelet[3303]: I0513 23:59:32.522498 3303 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-etc-cni-netd\") on node \"ip-172-31-16-70\" DevicePath \"\"" May 13 23:59:32.522822 kubelet[3303]: I0513 23:59:32.522505 3303 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-clustermesh-secrets\") on node \"ip-172-31-16-70\" DevicePath \"\"" May 13 23:59:32.522822 kubelet[3303]: I0513 23:59:32.522513 3303 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d-lib-modules\") on node \"ip-172-31-16-70\" DevicePath \"\"" May 13 23:59:32.640258 systemd[1]: Removed slice kubepods-burstable-poda9241d94_e91e_459f_a5a5_a9f03cd3ed4d.slice - libcontainer container kubepods-burstable-poda9241d94_e91e_459f_a5a5_a9f03cd3ed4d.slice. May 13 23:59:32.640418 systemd[1]: kubepods-burstable-poda9241d94_e91e_459f_a5a5_a9f03cd3ed4d.slice: Consumed 8.260s CPU time, 191.7M memory peak, 70.1M read from disk, 13.3M written to disk. 
May 13 23:59:32.642273 systemd[1]: Removed slice kubepods-besteffort-pod7caa7b57_cb4d_4a81_aecb_798953548156.slice - libcontainer container kubepods-besteffort-pod7caa7b57_cb4d_4a81_aecb_798953548156.slice. May 13 23:59:33.013568 kubelet[3303]: I0513 23:59:33.012934 3303 scope.go:117] "RemoveContainer" containerID="727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd" May 13 23:59:33.017725 containerd[1904]: time="2025-05-13T23:59:33.017520943Z" level=info msg="RemoveContainer for \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\"" May 13 23:59:33.047547 containerd[1904]: time="2025-05-13T23:59:33.047504963Z" level=info msg="RemoveContainer for \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" returns successfully" May 13 23:59:33.047836 kubelet[3303]: I0513 23:59:33.047795 3303 scope.go:117] "RemoveContainer" containerID="727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd" May 13 23:59:33.064906 containerd[1904]: time="2025-05-13T23:59:33.048057463Z" level=error msg="ContainerStatus for \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\": not found" May 13 23:59:33.066131 kubelet[3303]: E0513 23:59:33.066088 3303 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\": not found" containerID="727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd" May 13 23:59:33.068183 kubelet[3303]: I0513 23:59:33.067968 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd"} err="failed to get container status 
\"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"727fa4fbe1084783daabc0d859ecfdb9c0e1c44049d171fce595bff55b3f7fdd\": not found" May 13 23:59:33.068183 kubelet[3303]: I0513 23:59:33.068078 3303 scope.go:117] "RemoveContainer" containerID="74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c" May 13 23:59:33.070503 containerd[1904]: time="2025-05-13T23:59:33.069985992Z" level=info msg="RemoveContainer for \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\"" May 13 23:59:33.076090 containerd[1904]: time="2025-05-13T23:59:33.076053269Z" level=info msg="RemoveContainer for \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" returns successfully" May 13 23:59:33.076271 kubelet[3303]: I0513 23:59:33.076256 3303 scope.go:117] "RemoveContainer" containerID="472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c" May 13 23:59:33.077671 containerd[1904]: time="2025-05-13T23:59:33.077642778Z" level=info msg="RemoveContainer for \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\"" May 13 23:59:33.083712 containerd[1904]: time="2025-05-13T23:59:33.083671738Z" level=info msg="RemoveContainer for \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\" returns successfully" May 13 23:59:33.083943 kubelet[3303]: I0513 23:59:33.083861 3303 scope.go:117] "RemoveContainer" containerID="b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5" May 13 23:59:33.086015 containerd[1904]: time="2025-05-13T23:59:33.085986649Z" level=info msg="RemoveContainer for \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\"" May 13 23:59:33.092060 containerd[1904]: time="2025-05-13T23:59:33.091996507Z" level=info msg="RemoveContainer for \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\" returns successfully" May 13 23:59:33.092232 kubelet[3303]: I0513 23:59:33.092213 3303 
scope.go:117] "RemoveContainer" containerID="3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7" May 13 23:59:33.094006 containerd[1904]: time="2025-05-13T23:59:33.093567070Z" level=info msg="RemoveContainer for \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\"" May 13 23:59:33.099059 containerd[1904]: time="2025-05-13T23:59:33.099026895Z" level=info msg="RemoveContainer for \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\" returns successfully" May 13 23:59:33.099296 kubelet[3303]: I0513 23:59:33.099226 3303 scope.go:117] "RemoveContainer" containerID="e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38" May 13 23:59:33.100765 containerd[1904]: time="2025-05-13T23:59:33.100735441Z" level=info msg="RemoveContainer for \"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\"" May 13 23:59:33.106004 containerd[1904]: time="2025-05-13T23:59:33.105972901Z" level=info msg="RemoveContainer for \"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\" returns successfully" May 13 23:59:33.106238 kubelet[3303]: I0513 23:59:33.106184 3303 scope.go:117] "RemoveContainer" containerID="74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c" May 13 23:59:33.106487 containerd[1904]: time="2025-05-13T23:59:33.106432894Z" level=error msg="ContainerStatus for \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\": not found" May 13 23:59:33.106613 kubelet[3303]: E0513 23:59:33.106589 3303 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\": not found" containerID="74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c" May 13 
23:59:33.106654 kubelet[3303]: I0513 23:59:33.106620 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c"} err="failed to get container status \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\": rpc error: code = NotFound desc = an error occurred when try to find container \"74f446b39be7d0adb0abc14edb409d43781139f8ef2e6bb6c5e38e25819a283c\": not found" May 13 23:59:33.106654 kubelet[3303]: I0513 23:59:33.106642 3303 scope.go:117] "RemoveContainer" containerID="472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c" May 13 23:59:33.106854 containerd[1904]: time="2025-05-13T23:59:33.106800921Z" level=error msg="ContainerStatus for \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\": not found" May 13 23:59:33.107006 kubelet[3303]: E0513 23:59:33.106983 3303 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\": not found" containerID="472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c" May 13 23:59:33.107085 kubelet[3303]: I0513 23:59:33.107009 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c"} err="failed to get container status \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"472356aeb9e7a0af72568090c5212b5b575a660d1e71f897a0e36d6977d10c3c\": not found" May 13 23:59:33.107085 kubelet[3303]: I0513 23:59:33.107039 3303 scope.go:117] 
"RemoveContainer" containerID="b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5" May 13 23:59:33.107269 containerd[1904]: time="2025-05-13T23:59:33.107239111Z" level=error msg="ContainerStatus for \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\": not found" May 13 23:59:33.107514 kubelet[3303]: E0513 23:59:33.107364 3303 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\": not found" containerID="b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5" May 13 23:59:33.107561 kubelet[3303]: I0513 23:59:33.107516 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5"} err="failed to get container status \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2e1221da662a7d50c02c2bf0c7f16394b91fe9a71f3b090f60ce685e0bd72c5\": not found" May 13 23:59:33.107561 kubelet[3303]: I0513 23:59:33.107533 3303 scope.go:117] "RemoveContainer" containerID="3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7" May 13 23:59:33.107777 containerd[1904]: time="2025-05-13T23:59:33.107745482Z" level=error msg="ContainerStatus for \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\": not found" May 13 23:59:33.107867 kubelet[3303]: E0513 23:59:33.107847 3303 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\": not found" containerID="3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7" May 13 23:59:33.107915 kubelet[3303]: I0513 23:59:33.107869 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7"} err="failed to get container status \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"3783ac5065845eb0890836ef869533bd8a17fe9d98379bfe7de76fc9cef059b7\": not found" May 13 23:59:33.107915 kubelet[3303]: I0513 23:59:33.107905 3303 scope.go:117] "RemoveContainer" containerID="e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38" May 13 23:59:33.108082 containerd[1904]: time="2025-05-13T23:59:33.108056998Z" level=error msg="ContainerStatus for \"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\": not found" May 13 23:59:33.108231 kubelet[3303]: E0513 23:59:33.108192 3303 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\": not found" containerID="e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38" May 13 23:59:33.108231 kubelet[3303]: I0513 23:59:33.108225 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38"} err="failed to get container status 
\"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5da149a520305b19cbbc7794b2c2180f4277d1e1f69050970454ab89cde2a38\": not found" May 13 23:59:33.113750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3-shm.mount: Deactivated successfully. May 13 23:59:33.113857 systemd[1]: var-lib-kubelet-pods-7caa7b57\x2dcb4d\x2d4a81\x2daecb\x2d798953548156-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df6h48.mount: Deactivated successfully. May 13 23:59:33.113928 systemd[1]: var-lib-kubelet-pods-a9241d94\x2de91e\x2d459f\x2da5a5\x2da9f03cd3ed4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dswhs7.mount: Deactivated successfully. May 13 23:59:33.114004 systemd[1]: var-lib-kubelet-pods-a9241d94\x2de91e\x2d459f\x2da5a5\x2da9f03cd3ed4d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 23:59:33.114068 systemd[1]: var-lib-kubelet-pods-a9241d94\x2de91e\x2d459f\x2da5a5\x2da9f03cd3ed4d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 23:59:33.757611 kubelet[3303]: E0513 23:59:33.757555 3303 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 23:59:34.009926 sshd[5094]: Connection closed by 147.75.109.163 port 56076 May 13 23:59:34.010457 sshd-session[5092]: pam_unix(sshd:session): session closed for user core May 13 23:59:34.015099 systemd-logind[1892]: Session 25 logged out. Waiting for processes to exit. May 13 23:59:34.016105 systemd[1]: sshd@24-172.31.16.70:22-147.75.109.163:56076.service: Deactivated successfully. May 13 23:59:34.018667 systemd[1]: session-25.scope: Deactivated successfully. May 13 23:59:34.020073 systemd-logind[1892]: Removed session 25. 
May 13 23:59:34.040952 systemd[1]: Started sshd@25-172.31.16.70:22-147.75.109.163:56088.service - OpenSSH per-connection server daemon (147.75.109.163:56088). May 13 23:59:34.216653 sshd[5245]: Accepted publickey for core from 147.75.109.163 port 56088 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:59:34.218065 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:34.222706 systemd-logind[1892]: New session 26 of user core. May 13 23:59:34.227428 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 23:59:34.626216 kubelet[3303]: I0513 23:59:34.626130 3303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7caa7b57-cb4d-4a81-aecb-798953548156" path="/var/lib/kubelet/pods/7caa7b57-cb4d-4a81-aecb-798953548156/volumes" May 13 23:59:34.627297 kubelet[3303]: I0513 23:59:34.627196 3303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" path="/var/lib/kubelet/pods/a9241d94-e91e-459f-a5a5-a9f03cd3ed4d/volumes" May 13 23:59:34.893114 ntpd[1887]: Deleting interface #12 lxc_health, fe80::3cf4:58ff:fe62:f856%8#123, interface stats: received=0, sent=0, dropped=0, active_time=45 secs May 13 23:59:34.894301 ntpd[1887]: 13 May 23:59:34 ntpd[1887]: Deleting interface #12 lxc_health, fe80::3cf4:58ff:fe62:f856%8#123, interface stats: received=0, sent=0, dropped=0, active_time=45 secs May 13 23:59:34.894354 sshd[5247]: Connection closed by 147.75.109.163 port 56088 May 13 23:59:34.895676 sshd-session[5245]: pam_unix(sshd:session): session closed for user core May 13 23:59:34.897479 kubelet[3303]: I0513 23:59:34.895993 3303 topology_manager.go:215] "Topology Admit Handler" podUID="6430271f-6738-43d4-933d-e7883da6d877" podNamespace="kube-system" podName="cilium-792rl" May 13 23:59:34.901539 systemd[1]: sshd@25-172.31.16.70:22-147.75.109.163:56088.service: Deactivated successfully. 
May 13 23:59:34.907163 systemd[1]: session-26.scope: Deactivated successfully. May 13 23:59:34.909779 kubelet[3303]: E0513 23:59:34.909749 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" containerName="cilium-agent" May 13 23:59:34.909887 kubelet[3303]: E0513 23:59:34.909792 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" containerName="clean-cilium-state" May 13 23:59:34.909887 kubelet[3303]: E0513 23:59:34.909800 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" containerName="apply-sysctl-overwrites" May 13 23:59:34.909887 kubelet[3303]: E0513 23:59:34.909806 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" containerName="mount-bpf-fs" May 13 23:59:34.909887 kubelet[3303]: E0513 23:59:34.909818 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7caa7b57-cb4d-4a81-aecb-798953548156" containerName="cilium-operator" May 13 23:59:34.909887 kubelet[3303]: E0513 23:59:34.909828 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" containerName="mount-cgroup" May 13 23:59:34.911100 systemd-logind[1892]: Session 26 logged out. Waiting for processes to exit. May 13 23:59:34.913663 systemd-logind[1892]: Removed session 26. 
May 13 23:59:34.923267 kubelet[3303]: I0513 23:59:34.921194 3303 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9241d94-e91e-459f-a5a5-a9f03cd3ed4d" containerName="cilium-agent" May 13 23:59:34.923267 kubelet[3303]: I0513 23:59:34.923266 3303 memory_manager.go:354] "RemoveStaleState removing state" podUID="7caa7b57-cb4d-4a81-aecb-798953548156" containerName="cilium-operator" May 13 23:59:34.932133 systemd[1]: Started sshd@26-172.31.16.70:22-147.75.109.163:56096.service - OpenSSH per-connection server daemon (147.75.109.163:56096). May 13 23:59:34.953977 systemd[1]: Created slice kubepods-burstable-pod6430271f_6738_43d4_933d_e7883da6d877.slice - libcontainer container kubepods-burstable-pod6430271f_6738_43d4_933d_e7883da6d877.slice. May 13 23:59:35.046852 kubelet[3303]: I0513 23:59:35.046803 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6430271f-6738-43d4-933d-e7883da6d877-cilium-config-path\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047263 kubelet[3303]: I0513 23:59:35.046860 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6430271f-6738-43d4-933d-e7883da6d877-cilium-cgroup\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047263 kubelet[3303]: I0513 23:59:35.046881 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6430271f-6738-43d4-933d-e7883da6d877-etc-cni-netd\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047263 kubelet[3303]: I0513 23:59:35.046897 3303 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6430271f-6738-43d4-933d-e7883da6d877-host-proc-sys-net\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047263 kubelet[3303]: I0513 23:59:35.046917 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6430271f-6738-43d4-933d-e7883da6d877-xtables-lock\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047263 kubelet[3303]: I0513 23:59:35.046990 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6430271f-6738-43d4-933d-e7883da6d877-cilium-run\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047263 kubelet[3303]: I0513 23:59:35.047023 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6430271f-6738-43d4-933d-e7883da6d877-bpf-maps\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047482 kubelet[3303]: I0513 23:59:35.047041 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6430271f-6738-43d4-933d-e7883da6d877-lib-modules\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047482 kubelet[3303]: I0513 23:59:35.047058 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6430271f-6738-43d4-933d-e7883da6d877-clustermesh-secrets\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047482 kubelet[3303]: I0513 23:59:35.047073 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6430271f-6738-43d4-933d-e7883da6d877-host-proc-sys-kernel\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047482 kubelet[3303]: I0513 23:59:35.047090 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6430271f-6738-43d4-933d-e7883da6d877-hubble-tls\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047482 kubelet[3303]: I0513 23:59:35.047106 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6430271f-6738-43d4-933d-e7883da6d877-hostproc\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047482 kubelet[3303]: I0513 23:59:35.047121 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6430271f-6738-43d4-933d-e7883da6d877-cilium-ipsec-secrets\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047631 kubelet[3303]: I0513 23:59:35.047168 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6430271f-6738-43d4-933d-e7883da6d877-cni-path\") pod \"cilium-792rl\" (UID: 
\"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.047631 kubelet[3303]: I0513 23:59:35.047197 3303 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x8xc\" (UniqueName: \"kubernetes.io/projected/6430271f-6738-43d4-933d-e7883da6d877-kube-api-access-4x8xc\") pod \"cilium-792rl\" (UID: \"6430271f-6738-43d4-933d-e7883da6d877\") " pod="kube-system/cilium-792rl" May 13 23:59:35.116529 sshd[5258]: Accepted publickey for core from 147.75.109.163 port 56096 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4 May 13 23:59:35.117943 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:35.123270 systemd-logind[1892]: New session 27 of user core. May 13 23:59:35.127430 systemd[1]: Started session-27.scope - Session 27 of User core. May 13 23:59:35.247996 sshd[5260]: Connection closed by 147.75.109.163 port 56096 May 13 23:59:35.249587 sshd-session[5258]: pam_unix(sshd:session): session closed for user core May 13 23:59:35.252483 systemd[1]: sshd@26-172.31.16.70:22-147.75.109.163:56096.service: Deactivated successfully. May 13 23:59:35.254632 systemd[1]: session-27.scope: Deactivated successfully. May 13 23:59:35.256300 systemd-logind[1892]: Session 27 logged out. Waiting for processes to exit. May 13 23:59:35.257716 systemd-logind[1892]: Removed session 27. May 13 23:59:35.261564 containerd[1904]: time="2025-05-13T23:59:35.261529251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-792rl,Uid:6430271f-6738-43d4-933d-e7883da6d877,Namespace:kube-system,Attempt:0,}" May 13 23:59:35.282542 systemd[1]: Started sshd@27-172.31.16.70:22-147.75.109.163:56110.service - OpenSSH per-connection server daemon (147.75.109.163:56110). 
May 13 23:59:35.294190 containerd[1904]: time="2025-05-13T23:59:35.293995962Z" level=info msg="connecting to shim eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156" address="unix:///run/containerd/s/a2769f72b7e6f2e3b80ccb6718c31b6c5de03beeb136c13324da6494f487c887" namespace=k8s.io protocol=ttrpc version=3
May 13 23:59:35.336409 systemd[1]: Started cri-containerd-eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156.scope - libcontainer container eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156.
May 13 23:59:35.366721 containerd[1904]: time="2025-05-13T23:59:35.366394977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-792rl,Uid:6430271f-6738-43d4-933d-e7883da6d877,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\""
May 13 23:59:35.369829 containerd[1904]: time="2025-05-13T23:59:35.369731440Z" level=info msg="CreateContainer within sandbox \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 23:59:35.387334 containerd[1904]: time="2025-05-13T23:59:35.386639151Z" level=info msg="Container 6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c: CDI devices from CRI Config.CDIDevices: []"
May 13 23:59:35.398619 containerd[1904]: time="2025-05-13T23:59:35.398518536Z" level=info msg="CreateContainer within sandbox \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c\""
May 13 23:59:35.400236 containerd[1904]: time="2025-05-13T23:59:35.399195106Z" level=info msg="StartContainer for \"6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c\""
May 13 23:59:35.400236 containerd[1904]: time="2025-05-13T23:59:35.400136134Z" level=info msg="connecting to shim 6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c" address="unix:///run/containerd/s/a2769f72b7e6f2e3b80ccb6718c31b6c5de03beeb136c13324da6494f487c887" protocol=ttrpc version=3
May 13 23:59:35.417437 systemd[1]: Started cri-containerd-6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c.scope - libcontainer container 6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c.
May 13 23:59:35.449930 containerd[1904]: time="2025-05-13T23:59:35.449893014Z" level=info msg="StartContainer for \"6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c\" returns successfully"
May 13 23:59:35.468648 systemd[1]: cri-containerd-6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c.scope: Deactivated successfully.
May 13 23:59:35.468910 systemd[1]: cri-containerd-6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c.scope: Consumed 22ms CPU time, 9.1M memory peak, 2.8M read from disk.
May 13 23:59:35.472664 containerd[1904]: time="2025-05-13T23:59:35.472634055Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c\" id:\"6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c\" pid:5331 exited_at:{seconds:1747180775 nanos:471010761}"
May 13 23:59:35.472872 containerd[1904]: time="2025-05-13T23:59:35.472672133Z" level=info msg="received exit event container_id:\"6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c\" id:\"6ce59433a4f716fdb78d4e6ca92458392810602d64ae1f7865d17d1284ee9b0c\" pid:5331 exited_at:{seconds:1747180775 nanos:471010761}"
May 13 23:59:35.485854 sshd[5271]: Accepted publickey for core from 147.75.109.163 port 56110 ssh2: RSA SHA256:jID1Ne0XtVuWHgpdBL4aGeETU1EYp3HBJN6uawHuOr4
May 13 23:59:35.487897 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:35.494701 systemd-logind[1892]: New session 28 of user core.
May 13 23:59:35.501454 systemd[1]: Started session-28.scope - Session 28 of User core.
May 13 23:59:36.051116 containerd[1904]: time="2025-05-13T23:59:36.051074325Z" level=info msg="CreateContainer within sandbox \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 23:59:36.062820 containerd[1904]: time="2025-05-13T23:59:36.062778020Z" level=info msg="Container 53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f: CDI devices from CRI Config.CDIDevices: []"
May 13 23:59:36.072853 containerd[1904]: time="2025-05-13T23:59:36.072814473Z" level=info msg="CreateContainer within sandbox \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f\""
May 13 23:59:36.073729 containerd[1904]: time="2025-05-13T23:59:36.073352899Z" level=info msg="StartContainer for \"53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f\""
May 13 23:59:36.074321 containerd[1904]: time="2025-05-13T23:59:36.074277639Z" level=info msg="connecting to shim 53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f" address="unix:///run/containerd/s/a2769f72b7e6f2e3b80ccb6718c31b6c5de03beeb136c13324da6494f487c887" protocol=ttrpc version=3
May 13 23:59:36.096412 systemd[1]: Started cri-containerd-53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f.scope - libcontainer container 53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f.
May 13 23:59:36.131174 containerd[1904]: time="2025-05-13T23:59:36.131033506Z" level=info msg="StartContainer for \"53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f\" returns successfully"
May 13 23:59:36.142340 systemd[1]: cri-containerd-53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f.scope: Deactivated successfully.
May 13 23:59:36.143096 containerd[1904]: time="2025-05-13T23:59:36.143048180Z" level=info msg="received exit event container_id:\"53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f\" id:\"53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f\" pid:5382 exited_at:{seconds:1747180776 nanos:142789664}"
May 13 23:59:36.143087 systemd[1]: cri-containerd-53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f.scope: Consumed 19ms CPU time, 7.3M memory peak, 2M read from disk.
May 13 23:59:36.144449 containerd[1904]: time="2025-05-13T23:59:36.143667260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f\" id:\"53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f\" pid:5382 exited_at:{seconds:1747180776 nanos:142789664}"
May 13 23:59:36.155453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041523933.mount: Deactivated successfully.
May 13 23:59:36.172462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53874bd72915c8437ca29cf969ef5c1057cb7fb2ab7f0f05a5bfcc559103117f-rootfs.mount: Deactivated successfully.
May 13 23:59:37.048764 containerd[1904]: time="2025-05-13T23:59:37.048667491Z" level=info msg="CreateContainer within sandbox \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:59:37.066367 containerd[1904]: time="2025-05-13T23:59:37.064192716Z" level=info msg="Container 8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c: CDI devices from CRI Config.CDIDevices: []"
May 13 23:59:37.081952 containerd[1904]: time="2025-05-13T23:59:37.081363153Z" level=info msg="CreateContainer within sandbox \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c\""
May 13 23:59:37.084489 containerd[1904]: time="2025-05-13T23:59:37.084448897Z" level=info msg="StartContainer for \"8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c\""
May 13 23:59:37.085939 containerd[1904]: time="2025-05-13T23:59:37.085898495Z" level=info msg="connecting to shim 8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c" address="unix:///run/containerd/s/a2769f72b7e6f2e3b80ccb6718c31b6c5de03beeb136c13324da6494f487c887" protocol=ttrpc version=3
May 13 23:59:37.114394 systemd[1]: Started cri-containerd-8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c.scope - libcontainer container 8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c.
May 13 23:59:37.157419 containerd[1904]: time="2025-05-13T23:59:37.157303963Z" level=info msg="StartContainer for \"8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c\" returns successfully"
May 13 23:59:37.166790 systemd[1]: cri-containerd-8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c.scope: Deactivated successfully.
May 13 23:59:37.168271 containerd[1904]: time="2025-05-13T23:59:37.168097144Z" level=info msg="received exit event container_id:\"8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c\" id:\"8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c\" pid:5427 exited_at:{seconds:1747180777 nanos:167561502}"
May 13 23:59:37.168271 containerd[1904]: time="2025-05-13T23:59:37.168233732Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c\" id:\"8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c\" pid:5427 exited_at:{seconds:1747180777 nanos:167561502}"
May 13 23:59:37.193386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d3d644a4a6954da5510c4c69aa4b852c3868308891db1fbc457cf7479c2807c-rootfs.mount: Deactivated successfully.
May 13 23:59:38.055120 containerd[1904]: time="2025-05-13T23:59:38.055033009Z" level=info msg="CreateContainer within sandbox \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:59:38.069730 containerd[1904]: time="2025-05-13T23:59:38.069684959Z" level=info msg="Container 9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150: CDI devices from CRI Config.CDIDevices: []"
May 13 23:59:38.080542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926355759.mount: Deactivated successfully.
May 13 23:59:38.085109 containerd[1904]: time="2025-05-13T23:59:38.085070639Z" level=info msg="CreateContainer within sandbox \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150\""
May 13 23:59:38.085855 containerd[1904]: time="2025-05-13T23:59:38.085666990Z" level=info msg="StartContainer for \"9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150\""
May 13 23:59:38.087493 containerd[1904]: time="2025-05-13T23:59:38.087254268Z" level=info msg="connecting to shim 9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150" address="unix:///run/containerd/s/a2769f72b7e6f2e3b80ccb6718c31b6c5de03beeb136c13324da6494f487c887" protocol=ttrpc version=3
May 13 23:59:38.121777 systemd[1]: Started cri-containerd-9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150.scope - libcontainer container 9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150.
May 13 23:59:38.147844 systemd[1]: cri-containerd-9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150.scope: Deactivated successfully.
May 13 23:59:38.149948 containerd[1904]: time="2025-05-13T23:59:38.149914852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150\" id:\"9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150\" pid:5466 exited_at:{seconds:1747180778 nanos:148785092}"
May 13 23:59:38.162470 containerd[1904]: time="2025-05-13T23:59:38.162396000Z" level=info msg="received exit event container_id:\"9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150\" id:\"9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150\" pid:5466 exited_at:{seconds:1747180778 nanos:148785092}"
May 13 23:59:38.163996 containerd[1904]: time="2025-05-13T23:59:38.163859337Z" level=info msg="StartContainer for \"9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150\" returns successfully"
May 13 23:59:38.191778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e83ecb7eb8d2225dc683fcf92e3015d3308e276c6563d1f1a2b5044fa4bc150-rootfs.mount: Deactivated successfully.
May 13 23:59:38.758922 kubelet[3303]: E0513 23:59:38.758873 3303 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 23:59:39.061661 containerd[1904]: time="2025-05-13T23:59:39.061538801Z" level=info msg="CreateContainer within sandbox \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:59:39.087534 containerd[1904]: time="2025-05-13T23:59:39.086836115Z" level=info msg="Container 5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c: CDI devices from CRI Config.CDIDevices: []"
May 13 23:59:39.103499 containerd[1904]: time="2025-05-13T23:59:39.103455041Z" level=info msg="CreateContainer within sandbox \"eb3a6b2601cdffecf6b562881fa6e36d18162c75fa907c90639db5f6092ad156\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c\""
May 13 23:59:39.105336 containerd[1904]: time="2025-05-13T23:59:39.104171016Z" level=info msg="StartContainer for \"5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c\""
May 13 23:59:39.105336 containerd[1904]: time="2025-05-13T23:59:39.104973960Z" level=info msg="connecting to shim 5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c" address="unix:///run/containerd/s/a2769f72b7e6f2e3b80ccb6718c31b6c5de03beeb136c13324da6494f487c887" protocol=ttrpc version=3
May 13 23:59:39.129415 systemd[1]: Started cri-containerd-5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c.scope - libcontainer container 5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c.
May 13 23:59:39.179890 containerd[1904]: time="2025-05-13T23:59:39.179845966Z" level=info msg="StartContainer for \"5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c\" returns successfully"
May 13 23:59:39.279748 containerd[1904]: time="2025-05-13T23:59:39.279708562Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c\" id:\"fc80e76e12f483b707f98e4159b8e92a2c6dd357daa1bd4a35934ccaa7adfce3\" pid:5532 exited_at:{seconds:1747180779 nanos:279193547}"
May 13 23:59:39.751298 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 13 23:59:40.081808 kubelet[3303]: I0513 23:59:40.081650 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-792rl" podStartSLOduration=6.081631405 podStartE2EDuration="6.081631405s" podCreationTimestamp="2025-05-13 23:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:40.080655638 +0000 UTC m=+91.623700457" watchObservedRunningTime="2025-05-13 23:59:40.081631405 +0000 UTC m=+91.624676225"
May 13 23:59:41.338229 kubelet[3303]: I0513 23:59:41.335751 3303 setters.go:580] "Node became not ready" node="ip-172-31-16-70" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T23:59:41Z","lastTransitionTime":"2025-05-13T23:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 23:59:42.120478 containerd[1904]: time="2025-05-13T23:59:42.120363961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c\" id:\"4be3287b9568fb6c140ebe4d8096f52b4075d1e9e54e83e7c569fdf2d7c17707\" pid:5842 exit_status:1 exited_at:{seconds:1747180782 nanos:119107024}"
May 13 23:59:42.707475 systemd-networkd[1825]: lxc_health: Link UP
May 13 23:59:42.714734 systemd-networkd[1825]: lxc_health: Gained carrier
May 13 23:59:42.715362 (udev-worker)[6048]: Network interface NamePolicy= disabled on kernel command line.
May 13 23:59:44.324784 containerd[1904]: time="2025-05-13T23:59:44.324599889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c\" id:\"dc857cef292bac17ebac38e8215f449a0d69172e26a6aaf712292af86ae94254\" pid:6089 exited_at:{seconds:1747180784 nanos:322647518}"
May 13 23:59:44.657408 systemd-networkd[1825]: lxc_health: Gained IPv6LL
May 13 23:59:46.868124 containerd[1904]: time="2025-05-13T23:59:46.868072800Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c\" id:\"56e332d24c3ce3dce9e46306aeff94c486db4a19e0857c14b359d62be02dc5c7\" pid:6116 exited_at:{seconds:1747180786 nanos:867525237}"
May 13 23:59:46.893505 ntpd[1887]: Listen normally on 15 lxc_health [fe80::388e:94ff:feda:716a%14]:123
May 13 23:59:46.893925 ntpd[1887]: 13 May 23:59:46 ntpd[1887]: Listen normally on 15 lxc_health [fe80::388e:94ff:feda:716a%14]:123
May 13 23:59:48.982515 containerd[1904]: time="2025-05-13T23:59:48.982450152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c\" id:\"f2ab1fb0aa13aa784a8317ea2e9e18fd3d1962cf11d59ecffc5adb9f2aab4ab5\" pid:6145 exited_at:{seconds:1747180788 nanos:982092894}"
May 13 23:59:51.121730 containerd[1904]: time="2025-05-13T23:59:51.121682135Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5819cc9595c09d2f57f2e8f36c34dc71e6012671804dd31412502f445122049c\" id:\"e01482290b2ead4a3f599ed07301e085fb3a50a92388232e31d6d57d7ab0bd1b\" pid:6166 exited_at:{seconds:1747180791 nanos:121370590}"
May 13 23:59:51.148455 sshd[5363]: Connection closed by 147.75.109.163 port 56110
May 13 23:59:51.149867 sshd-session[5271]: pam_unix(sshd:session): session closed for user core
May 13 23:59:51.153634 systemd-logind[1892]: Session 28 logged out. Waiting for processes to exit.
May 13 23:59:51.154012 systemd[1]: sshd@27-172.31.16.70:22-147.75.109.163:56110.service: Deactivated successfully.
May 13 23:59:51.156195 systemd[1]: session-28.scope: Deactivated successfully.
May 13 23:59:51.157616 systemd-logind[1892]: Removed session 28.
May 14 00:00:06.248445 systemd[1]: cri-containerd-61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b.scope: Deactivated successfully.
May 14 00:00:06.248849 systemd[1]: cri-containerd-61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b.scope: Consumed 3.711s CPU time, 72.1M memory peak, 24.3M read from disk.
May 14 00:00:06.250879 containerd[1904]: time="2025-05-14T00:00:06.250831900Z" level=info msg="received exit event container_id:\"61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b\" id:\"61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b\" pid:3134 exit_status:1 exited_at:{seconds:1747180806 nanos:250565800}"
May 14 00:00:06.254336 containerd[1904]: time="2025-05-14T00:00:06.254275261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b\" id:\"61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b\" pid:3134 exit_status:1 exited_at:{seconds:1747180806 nanos:250565800}"
May 14 00:00:06.256349 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
May 14 00:00:06.291997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b-rootfs.mount: Deactivated successfully.
May 14 00:00:06.294138 systemd[1]: logrotate.service: Deactivated successfully.
May 14 00:00:07.126154 kubelet[3303]: I0514 00:00:07.125470 3303 scope.go:117] "RemoveContainer" containerID="61032fcbccf235a794991b0b066b722ae468317223c35be2e26348d61f35056b"
May 14 00:00:07.128807 containerd[1904]: time="2025-05-14T00:00:07.128748899Z" level=info msg="CreateContainer within sandbox \"848bfa66ca2717de93eca804a4db0c0725a12137f237df48722b7caf842a08d8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 14 00:00:07.155466 containerd[1904]: time="2025-05-14T00:00:07.153579170Z" level=info msg="Container 0133a047e3ea957bd5a82c8e991d4036e85c63623c0784c44f996b64b0ca52ac: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:07.156518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407712314.mount: Deactivated successfully.
May 14 00:00:07.169399 containerd[1904]: time="2025-05-14T00:00:07.169361891Z" level=info msg="CreateContainer within sandbox \"848bfa66ca2717de93eca804a4db0c0725a12137f237df48722b7caf842a08d8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0133a047e3ea957bd5a82c8e991d4036e85c63623c0784c44f996b64b0ca52ac\""
May 14 00:00:07.170093 containerd[1904]: time="2025-05-14T00:00:07.169883912Z" level=info msg="StartContainer for \"0133a047e3ea957bd5a82c8e991d4036e85c63623c0784c44f996b64b0ca52ac\""
May 14 00:00:07.171073 containerd[1904]: time="2025-05-14T00:00:07.171044531Z" level=info msg="connecting to shim 0133a047e3ea957bd5a82c8e991d4036e85c63623c0784c44f996b64b0ca52ac" address="unix:///run/containerd/s/5ef55dcef7d672683f77e669d2dd0694ef7bf9f84bf0ec3804c9a36cec6fced2" protocol=ttrpc version=3
May 14 00:00:07.191741 systemd[1]: Started cri-containerd-0133a047e3ea957bd5a82c8e991d4036e85c63623c0784c44f996b64b0ca52ac.scope - libcontainer container 0133a047e3ea957bd5a82c8e991d4036e85c63623c0784c44f996b64b0ca52ac.
May 14 00:00:07.250955 containerd[1904]: time="2025-05-14T00:00:07.250895149Z" level=info msg="StartContainer for \"0133a047e3ea957bd5a82c8e991d4036e85c63623c0784c44f996b64b0ca52ac\" returns successfully"
May 14 00:00:08.615790 containerd[1904]: time="2025-05-14T00:00:08.615746710Z" level=info msg="StopPodSandbox for \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\""
May 14 00:00:08.616520 containerd[1904]: time="2025-05-14T00:00:08.615904794Z" level=info msg="TearDown network for sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" successfully"
May 14 00:00:08.616520 containerd[1904]: time="2025-05-14T00:00:08.615916585Z" level=info msg="StopPodSandbox for \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" returns successfully"
May 14 00:00:08.616520 containerd[1904]: time="2025-05-14T00:00:08.616379348Z" level=info msg="RemovePodSandbox for \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\""
May 14 00:00:08.616520 containerd[1904]: time="2025-05-14T00:00:08.616416179Z" level=info msg="Forcibly stopping sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\""
May 14 00:00:08.626298 containerd[1904]: time="2025-05-14T00:00:08.625484654Z" level=info msg="TearDown network for sandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" successfully"
May 14 00:00:08.632388 containerd[1904]: time="2025-05-14T00:00:08.632343135Z" level=info msg="Ensure that sandbox f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3 in task-service has been cleanup successfully"
May 14 00:00:08.638884 containerd[1904]: time="2025-05-14T00:00:08.638725786Z" level=info msg="RemovePodSandbox \"f99eac2eab06719d04da324b670c9658166e576a5ed495a2771a78ab2f52a7d3\" returns successfully"
May 14 00:00:08.640369 containerd[1904]: time="2025-05-14T00:00:08.639604498Z" level=info msg="StopPodSandbox for \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\""
May 14 00:00:08.640369 containerd[1904]: time="2025-05-14T00:00:08.639746099Z" level=info msg="TearDown network for sandbox \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" successfully"
May 14 00:00:08.640369 containerd[1904]: time="2025-05-14T00:00:08.639759543Z" level=info msg="StopPodSandbox for \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" returns successfully"
May 14 00:00:08.640606 containerd[1904]: time="2025-05-14T00:00:08.640572281Z" level=info msg="RemovePodSandbox for \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\""
May 14 00:00:08.640651 containerd[1904]: time="2025-05-14T00:00:08.640607570Z" level=info msg="Forcibly stopping sandbox \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\""
May 14 00:00:08.640765 containerd[1904]: time="2025-05-14T00:00:08.640740430Z" level=info msg="TearDown network for sandbox \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" successfully"
May 14 00:00:08.641962 containerd[1904]: time="2025-05-14T00:00:08.641932410Z" level=info msg="Ensure that sandbox 3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd in task-service has been cleanup successfully"
May 14 00:00:08.649670 containerd[1904]: time="2025-05-14T00:00:08.649619532Z" level=info msg="RemovePodSandbox \"3e8917a45035fcaddd3277e9b00158b503f523c1216f327d088ed1b8f9448cfd\" returns successfully"
May 14 00:00:10.632126 systemd[1]: cri-containerd-4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e.scope: Deactivated successfully.
May 14 00:00:10.632489 systemd[1]: cri-containerd-4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e.scope: Consumed 1.925s CPU time, 27.1M memory peak, 8.6M read from disk.
May 14 00:00:10.634671 containerd[1904]: time="2025-05-14T00:00:10.634424277Z" level=info msg="received exit event container_id:\"4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e\" id:\"4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e\" pid:3108 exit_status:1 exited_at:{seconds:1747180810 nanos:631809172}"
May 14 00:00:10.634671 containerd[1904]: time="2025-05-14T00:00:10.634553309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e\" id:\"4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e\" pid:3108 exit_status:1 exited_at:{seconds:1747180810 nanos:631809172}"
May 14 00:00:10.661849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e-rootfs.mount: Deactivated successfully.
May 14 00:00:10.982058 kubelet[3303]: E0514 00:00:10.981071 3303 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-70?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 14 00:00:11.136893 kubelet[3303]: I0514 00:00:11.136858 3303 scope.go:117] "RemoveContainer" containerID="4ba9568668707a586736f3c82c2d36312c612192deca488b2a218330e225664e"
May 14 00:00:11.139361 containerd[1904]: time="2025-05-14T00:00:11.139320031Z" level=info msg="CreateContainer within sandbox \"e060f7cd094f8a9f2f404d8cf29aa6dbe4ad8ba707e14333d8f156eafcf8e5bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 14 00:00:11.156174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822318028.mount: Deactivated successfully.
May 14 00:00:11.156651 containerd[1904]: time="2025-05-14T00:00:11.156615479Z" level=info msg="Container 742cabae361e2bf45614ae9fb5c27e2ecd72210dc40b5eef84418dc00e12aa03: CDI devices from CRI Config.CDIDevices: []"
May 14 00:00:11.170061 containerd[1904]: time="2025-05-14T00:00:11.170009611Z" level=info msg="CreateContainer within sandbox \"e060f7cd094f8a9f2f404d8cf29aa6dbe4ad8ba707e14333d8f156eafcf8e5bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"742cabae361e2bf45614ae9fb5c27e2ecd72210dc40b5eef84418dc00e12aa03\""
May 14 00:00:11.170747 containerd[1904]: time="2025-05-14T00:00:11.170713514Z" level=info msg="StartContainer for \"742cabae361e2bf45614ae9fb5c27e2ecd72210dc40b5eef84418dc00e12aa03\""
May 14 00:00:11.172049 containerd[1904]: time="2025-05-14T00:00:11.172016952Z" level=info msg="connecting to shim 742cabae361e2bf45614ae9fb5c27e2ecd72210dc40b5eef84418dc00e12aa03" address="unix:///run/containerd/s/1227867335d12f23d3f799886972f59c73754c330d5cc236eb96245a7ad10737" protocol=ttrpc version=3
May 14 00:00:11.199419 systemd[1]: Started cri-containerd-742cabae361e2bf45614ae9fb5c27e2ecd72210dc40b5eef84418dc00e12aa03.scope - libcontainer container 742cabae361e2bf45614ae9fb5c27e2ecd72210dc40b5eef84418dc00e12aa03.
May 14 00:00:11.254780 containerd[1904]: time="2025-05-14T00:00:11.254633343Z" level=info msg="StartContainer for \"742cabae361e2bf45614ae9fb5c27e2ecd72210dc40b5eef84418dc00e12aa03\" returns successfully"
May 14 00:00:20.982355 kubelet[3303]: E0514 00:00:20.981849 3303 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-70?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"