May 8 00:03:49.920838 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025
May 8 00:03:49.920867 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:03:49.920882 kernel: BIOS-provided physical RAM map:
May 8 00:03:49.920889 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 8 00:03:49.920896 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 8 00:03:49.920902 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 8 00:03:49.920910 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 8 00:03:49.920917 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 8 00:03:49.920923 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 8 00:03:49.920930 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 8 00:03:49.920936 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 8 00:03:49.920946 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 8 00:03:49.920956 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 8 00:03:49.920963 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 8 00:03:49.920974 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 8 00:03:49.920981 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 8 00:03:49.920991 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 8 00:03:49.920998 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 8 00:03:49.921005 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 8 00:03:49.921012 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 8 00:03:49.921019 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 8 00:03:49.921026 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 8 00:03:49.921033 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 8 00:03:49.921040 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 00:03:49.921047 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 8 00:03:49.921055 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 00:03:49.921062 kernel: NX (Execute Disable) protection: active
May 8 00:03:49.921071 kernel: APIC: Static calls initialized
May 8 00:03:49.921079 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 8 00:03:49.921086 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 8 00:03:49.921093 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 8 00:03:49.921100 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 8 00:03:49.921107 kernel: extended physical RAM map:
May 8 00:03:49.921114 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 8 00:03:49.921121 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 8 00:03:49.921128 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 8 00:03:49.921136 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 8 00:03:49.921143 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 8 00:03:49.921150 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 8 00:03:49.921160 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 8 00:03:49.921171 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 8 00:03:49.921179 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 8 00:03:49.921186 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 8 00:03:49.921193 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 8 00:03:49.921200 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 8 00:03:49.921213 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 8 00:03:49.921220 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 8 00:03:49.921227 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 8 00:03:49.921235 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 8 00:03:49.921242 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 8 00:03:49.921250 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 8 00:03:49.921257 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 8 00:03:49.921264 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 8 00:03:49.921272 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 8 00:03:49.921282 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 8 00:03:49.921289 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 8 00:03:49.921296 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 8 00:03:49.921304 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 00:03:49.921313 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 8 00:03:49.921320 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 00:03:49.921327 kernel: efi: EFI v2.7 by EDK II
May 8 00:03:49.921335 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 8 00:03:49.921342 kernel: random: crng init done
May 8 00:03:49.921350 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 8 00:03:49.921365 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 8 00:03:49.921376 kernel: secureboot: Secure boot disabled
May 8 00:03:49.921386 kernel: SMBIOS 2.8 present.
May 8 00:03:49.921393 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 8 00:03:49.921401 kernel: Hypervisor detected: KVM
May 8 00:03:49.921408 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:03:49.921415 kernel: kvm-clock: using sched offset of 3742244300 cycles
May 8 00:03:49.921423 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:03:49.921431 kernel: tsc: Detected 2794.748 MHz processor
May 8 00:03:49.921439 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:03:49.921447 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:03:49.921454 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 8 00:03:49.921464 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 8 00:03:49.921472 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:03:49.921480 kernel: Using GB pages for direct mapping
May 8 00:03:49.921487 kernel: ACPI: Early table checksum verification disabled
May 8 00:03:49.921495 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 8 00:03:49.921502 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:03:49.921510 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:03:49.921518 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:03:49.921525 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 8 00:03:49.921536 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:03:49.921543 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:03:49.921551 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:03:49.921559 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:03:49.921566 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 8 00:03:49.921574 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 8 00:03:49.921581 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 8 00:03:49.921589 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 8 00:03:49.921597 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 8 00:03:49.921607 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 8 00:03:49.921615 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 8 00:03:49.921622 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 8 00:03:49.921630 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 8 00:03:49.921637 kernel: No NUMA configuration found
May 8 00:03:49.921645 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 8 00:03:49.921653 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 8 00:03:49.921660 kernel: Zone ranges:
May 8 00:03:49.921668 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:03:49.921702 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 8 00:03:49.921710 kernel: Normal empty
May 8 00:03:49.921720 kernel: Movable zone start for each node
May 8 00:03:49.921727 kernel: Early memory node ranges
May 8 00:03:49.921735 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 8 00:03:49.921742 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 8 00:03:49.921750 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 8 00:03:49.921757 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 8 00:03:49.921765 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 8 00:03:49.921775 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 8 00:03:49.921783 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 8 00:03:49.921790 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 8 00:03:49.921798 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 8 00:03:49.921805 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:03:49.921813 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 8 00:03:49.921829 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 8 00:03:49.921839 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:03:49.921847 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 8 00:03:49.921855 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 8 00:03:49.921863 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 8 00:03:49.921873 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 8 00:03:49.921883 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 8 00:03:49.921891 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:03:49.921899 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:03:49.921907 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:03:49.921915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:03:49.921925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:03:49.921933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:03:49.921941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:03:49.921949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:03:49.921957 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:03:49.921965 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:03:49.921972 kernel: TSC deadline timer available
May 8 00:03:49.921980 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 8 00:03:49.921988 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 00:03:49.921999 kernel: kvm-guest: KVM setup pv remote TLB flush
May 8 00:03:49.922006 kernel: kvm-guest: setup PV sched yield
May 8 00:03:49.922014 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 8 00:03:49.922022 kernel: Booting paravirtualized kernel on KVM
May 8 00:03:49.922030 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:03:49.922038 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 8 00:03:49.922046 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
May 8 00:03:49.922054 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
May 8 00:03:49.922061 kernel: pcpu-alloc: [0] 0 1 2 3
May 8 00:03:49.922072 kernel: kvm-guest: PV spinlocks enabled
May 8 00:03:49.922079 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 8 00:03:49.922088 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:03:49.922097 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:03:49.922105 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:03:49.922115 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:03:49.922123 kernel: Fallback order for Node 0: 0
May 8 00:03:49.922131 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 8 00:03:49.922138 kernel: Policy zone: DMA32
May 8 00:03:49.922149 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:03:49.922159 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 177824K reserved, 0K cma-reserved)
May 8 00:03:49.922169 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:03:49.922180 kernel: ftrace: allocating 37918 entries in 149 pages
May 8 00:03:49.922190 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:03:49.922200 kernel: Dynamic Preempt: voluntary
May 8 00:03:49.922211 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:03:49.922222 kernel: rcu: RCU event tracing is enabled.
May 8 00:03:49.922237 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:03:49.922247 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:03:49.922258 kernel: Rude variant of Tasks RCU enabled.
May 8 00:03:49.922269 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:03:49.922280 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:03:49.922290 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:03:49.922301 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 8 00:03:49.922312 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:03:49.922322 kernel: Console: colour dummy device 80x25
May 8 00:03:49.922331 kernel: printk: console [ttyS0] enabled
May 8 00:03:49.922342 kernel: ACPI: Core revision 20230628
May 8 00:03:49.922350 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 8 00:03:49.922366 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:03:49.922374 kernel: x2apic enabled
May 8 00:03:49.922382 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:03:49.922393 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 8 00:03:49.922401 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 8 00:03:49.922409 kernel: kvm-guest: setup PV IPIs
May 8 00:03:49.922417 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:03:49.922427 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 00:03:49.922435 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 8 00:03:49.922443 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 8 00:03:49.922451 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 8 00:03:49.922459 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 8 00:03:49.922467 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:03:49.922475 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:03:49.922483 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:03:49.922491 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:03:49.922501 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 8 00:03:49.922509 kernel: RETBleed: Mitigation: untrained return thunk
May 8 00:03:49.922517 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:03:49.922525 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:03:49.922533 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 8 00:03:49.922541 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 8 00:03:49.922551 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 8 00:03:49.922559 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:03:49.922570 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:03:49.922578 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:03:49.922586 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:03:49.922594 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 8 00:03:49.922602 kernel: Freeing SMP alternatives memory: 32K
May 8 00:03:49.922610 kernel: pid_max: default: 32768 minimum: 301
May 8 00:03:49.922617 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:03:49.922625 kernel: landlock: Up and running.
May 8 00:03:49.922633 kernel: SELinux: Initializing.
May 8 00:03:49.922644 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:03:49.922652 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:03:49.922660 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 8 00:03:49.922668 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:03:49.922688 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:03:49.922696 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:03:49.922704 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 8 00:03:49.922712 kernel: ... version: 0
May 8 00:03:49.922723 kernel: ... bit width: 48
May 8 00:03:49.922731 kernel: ... generic registers: 6
May 8 00:03:49.922739 kernel: ... value mask: 0000ffffffffffff
May 8 00:03:49.922747 kernel: ... max period: 00007fffffffffff
May 8 00:03:49.922754 kernel: ... fixed-purpose events: 0
May 8 00:03:49.922762 kernel: ... event mask: 000000000000003f
May 8 00:03:49.922770 kernel: signal: max sigframe size: 1776
May 8 00:03:49.922778 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:03:49.922786 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:03:49.922793 kernel: smp: Bringing up secondary CPUs ...
May 8 00:03:49.922804 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:03:49.922811 kernel: .... node #0, CPUs: #1 #2 #3
May 8 00:03:49.922819 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:03:49.922827 kernel: smpboot: Max logical packages: 1
May 8 00:03:49.922835 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 8 00:03:49.922843 kernel: devtmpfs: initialized
May 8 00:03:49.922850 kernel: x86/mm: Memory block size: 128MB
May 8 00:03:49.922858 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 8 00:03:49.922866 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 8 00:03:49.922877 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 8 00:03:49.922885 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 8 00:03:49.922894 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 8 00:03:49.922901 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 8 00:03:49.922909 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:03:49.922917 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:03:49.922925 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:03:49.922933 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:03:49.922941 kernel: audit: initializing netlink subsys (disabled)
May 8 00:03:49.922952 kernel: audit: type=2000 audit(1746662629.108:1): state=initialized audit_enabled=0 res=1
May 8 00:03:49.922960 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:03:49.922967 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:03:49.922975 kernel: cpuidle: using governor menu
May 8 00:03:49.922983 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:03:49.922991 kernel: dca service started, version 1.12.1
May 8 00:03:49.922999 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 8 00:03:49.923006 kernel: PCI: Using configuration type 1 for base access
May 8 00:03:49.923014 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:03:49.923025 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:03:49.923033 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:03:49.923041 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:03:49.923048 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:03:49.923056 kernel: ACPI: Added _OSI(Module Device)
May 8 00:03:49.923064 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:03:49.923072 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:03:49.923079 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:03:49.923087 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:03:49.923097 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:03:49.923105 kernel: ACPI: Interpreter enabled
May 8 00:03:49.923113 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 00:03:49.923121 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:03:49.923129 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:03:49.923136 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:03:49.923144 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 8 00:03:49.923152 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:03:49.923373 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:03:49.923527 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 8 00:03:49.923659 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 8 00:03:49.923670 kernel: PCI host bridge to bus 0000:00
May 8 00:03:49.923824 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:03:49.923946 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:03:49.924065 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:03:49.924188 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 8 00:03:49.924305 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 8 00:03:49.924436 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 8 00:03:49.924554 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:03:49.924775 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 8 00:03:49.924923 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 8 00:03:49.925059 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 8 00:03:49.925186 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 8 00:03:49.925314 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 8 00:03:49.925450 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 8 00:03:49.925579 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:03:49.925754 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:03:49.925888 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 8 00:03:49.926023 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 8 00:03:49.926168 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 8 00:03:49.926321 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 8 00:03:49.926465 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 8 00:03:49.926598 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 8 00:03:49.926775 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 8 00:03:49.926922 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 8 00:03:49.927060 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 8 00:03:49.927190 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 8 00:03:49.927417 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 8 00:03:49.927573 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 8 00:03:49.927733 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 8 00:03:49.927865 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 8 00:03:49.928010 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 8 00:03:49.928148 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 8 00:03:49.928277 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 8 00:03:49.928437 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 8 00:03:49.928582 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 8 00:03:49.928594 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:03:49.928603 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:03:49.928612 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:03:49.928624 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:03:49.928632 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 8 00:03:49.928640 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 8 00:03:49.928648 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 8 00:03:49.928656 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 8 00:03:49.928664 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 8 00:03:49.928686 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 8 00:03:49.928695 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 8 00:03:49.928703 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 8 00:03:49.928714 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 8 00:03:49.928722 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 8 00:03:49.928730 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 8 00:03:49.928737 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 8 00:03:49.928745 kernel: iommu: Default domain type: Translated
May 8 00:03:49.928753 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:03:49.928761 kernel: efivars: Registered efivars operations
May 8 00:03:49.928769 kernel: PCI: Using ACPI for IRQ routing
May 8 00:03:49.928777 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:03:49.928787 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 8 00:03:49.928795 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 8 00:03:49.928803 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 8 00:03:49.928810 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 8 00:03:49.928818 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 8 00:03:49.928826 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 8 00:03:49.928834 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 8 00:03:49.928842 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 8 00:03:49.928979 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 8 00:03:49.929108 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 8 00:03:49.929235 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:03:49.929245 kernel: vgaarb: loaded
May 8 00:03:49.929254 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:03:49.929262 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:03:49.929270 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:03:49.929278 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:03:49.929286 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:03:49.929298 kernel: pnp: PnP ACPI init
May 8 00:03:49.929519 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 8 00:03:49.929532 kernel: pnp: PnP ACPI: found 6 devices
May 8 00:03:49.929540 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:03:49.929548 kernel: NET: Registered PF_INET protocol family
May 8 00:03:49.929579 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:03:49.929590 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:03:49.929599 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:03:49.929609 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:03:49.929617 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:03:49.929626 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:03:49.929634 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:03:49.929642 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:03:49.929651 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:03:49.929659 kernel: NET: Registered PF_XDP protocol family
May 8 00:03:49.929807 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 8 00:03:49.929938 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 8 00:03:49.930064 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:03:49.930185 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:03:49.930303 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:03:49.930431 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 8 00:03:49.930557 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 8 00:03:49.930700 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 8 00:03:49.930712 kernel: PCI: CLS 0 bytes, default 64
May 8 00:03:49.930720 kernel: Initialise system trusted keyrings
May 8 00:03:49.930733 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:03:49.930742 kernel: Key type asymmetric registered
May 8 00:03:49.930750 kernel: Asymmetric key parser 'x509' registered
May 8 00:03:49.930758 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:03:49.930766 kernel: io scheduler mq-deadline registered
May 8 00:03:49.930774 kernel: io scheduler kyber registered
May 8 00:03:49.930782 kernel: io scheduler bfq registered
May 8 00:03:49.930790 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:03:49.930802 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 8 00:03:49.930823 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 8 00:03:49.930843 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 8 00:03:49.930853 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:03:49.930861 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:03:49.930870 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:03:49.930882 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:03:49.930891 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:03:49.931040 kernel: rtc_cmos 00:04: RTC can wake from S4
May 8 00:03:49.931053 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:03:49.931173 kernel: rtc_cmos 00:04: registered as rtc0
May 8 00:03:49.931293 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:03:49 UTC (1746662629)
May 8 00:03:49.931425 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 8 00:03:49.931436 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 8 00:03:49.931449 kernel: efifb: probing for efifb
May 8 00:03:49.931457 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 8 00:03:49.931465 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 8 00:03:49.931473 kernel: efifb: scrolling: redraw
May 8 00:03:49.931481 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 8 00:03:49.931490 kernel: Console: switching to colour frame buffer device 160x50
May 8 00:03:49.931498 kernel: fb0: EFI VGA frame buffer device
May 8 00:03:49.931506 kernel: pstore: Using crash dump compression: deflate
May 8 00:03:49.931514 kernel: pstore: Registered efi_pstore as persistent store backend
May 8 00:03:49.931525 kernel: NET: Registered PF_INET6 protocol family
May 8 00:03:49.931533 kernel: Segment Routing with IPv6
May 8 00:03:49.931542 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:03:49.931550 kernel: NET: Registered PF_PACKET protocol family
May 8 00:03:49.931558 kernel: Key type dns_resolver registered
May 8 00:03:49.931566 kernel: IPI shorthand broadcast: enabled
May 8 00:03:49.931574 kernel: sched_clock: Marking stable (793003348, 153206730)->(1013008542, -66798464)
May 8 00:03:49.931583 kernel: registered taskstats version 1
May 8 00:03:49.931591 kernel: Loading compiled-in X.509 certificates
May 8 00:03:49.931599 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6'
May 8 00:03:49.931610 kernel: Key type .fscrypt registered
May 8 00:03:49.931618 kernel: Key type fscrypt-provisioning registered
May 8 00:03:49.931626 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:03:49.931634 kernel: ima: Allocated hash algorithm: sha1
May 8 00:03:49.931642 kernel: ima: No architecture policies found
May 8 00:03:49.931651 kernel: clk: Disabling unused clocks
May 8 00:03:49.931659 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 8 00:03:49.931667 kernel: Write protecting the kernel read-only data: 38912k
May 8 00:03:49.931692 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 8 00:03:49.931700 kernel: Run /init as init process
May 8 00:03:49.931708 kernel: with arguments:
May 8 00:03:49.931716 kernel: /init
May 8 00:03:49.931724 kernel: with environment:
May 8 00:03:49.931732 kernel: HOME=/
May 8 00:03:49.931740 kernel: TERM=linux
May 8 00:03:49.931748 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:03:49.931757 systemd[1]: Successfully made /usr/ read-only.
May 8 00:03:49.931772 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:03:49.931782 systemd[1]: Detected virtualization kvm.
May 8 00:03:49.931790 systemd[1]: Detected architecture x86-64.
May 8 00:03:49.931799 systemd[1]: Running in initrd.
May 8 00:03:49.931807 systemd[1]: No hostname configured, using default hostname.
May 8 00:03:49.931816 systemd[1]: Hostname set to .
May 8 00:03:49.931825 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:03:49.931836 systemd[1]: Queued start job for default target initrd.target.
May 8 00:03:49.931845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:03:49.931854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:03:49.931863 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:03:49.931872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:03:49.931881 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:03:49.931891 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:03:49.931904 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:03:49.931913 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:03:49.931922 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:03:49.931930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:03:49.931939 systemd[1]: Reached target paths.target - Path Units.
May 8 00:03:49.931948 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:03:49.931956 systemd[1]: Reached target swap.target - Swaps.
May 8 00:03:49.931965 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:03:49.931974 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:03:49.931986 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:03:49.931994 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:03:49.932003 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 8 00:03:49.932012 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:03:49.932020 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:03:49.932029 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:03:49.932038 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:03:49.932047 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:03:49.932058 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:03:49.932067 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:03:49.932075 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:03:49.932084 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:03:49.932093 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:03:49.932101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:03:49.932110 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:03:49.932119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:03:49.932131 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:03:49.932140 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:03:49.932174 systemd-journald[194]: Collecting audit messages is disabled.
May 8 00:03:49.932198 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:03:49.932207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:03:49.932216 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:03:49.932225 systemd-journald[194]: Journal started
May 8 00:03:49.932246 systemd-journald[194]: Runtime Journal (/run/log/journal/4be40e6c05d14f2cac8db1629e27a550) is 6M, max 48.2M, 42.2M free.
May 8 00:03:49.930864 systemd-modules-load[195]: Inserted module 'overlay'
May 8 00:03:49.934588 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:03:49.946872 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:03:49.948322 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:03:49.950288 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:03:49.961466 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:03:49.970307 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:03:49.972743 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:03:49.974950 systemd-modules-load[195]: Inserted module 'br_netfilter'
May 8 00:03:49.975847 kernel: Bridge firewalling registered
May 8 00:03:49.983901 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:03:49.984638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:03:49.987571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:03:49.999744 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:03:50.002089 dracut-cmdline[224]: dracut-dracut-053
May 8 00:03:50.008221 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:03:50.007824 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:03:50.046798 systemd-resolved[238]: Positive Trust Anchors:
May 8 00:03:50.046811 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:03:50.046841 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:03:50.049340 systemd-resolved[238]: Defaulting to hostname 'linux'.
May 8 00:03:50.050497 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:03:50.056009 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:03:50.111711 kernel: SCSI subsystem initialized
May 8 00:03:50.120708 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:03:50.131700 kernel: iscsi: registered transport (tcp)
May 8 00:03:50.153719 kernel: iscsi: registered transport (qla4xxx)
May 8 00:03:50.153746 kernel: QLogic iSCSI HBA Driver
May 8 00:03:50.203599 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:03:50.209893 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:03:50.235561 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:03:50.235599 kernel: device-mapper: uevent: version 1.0.3
May 8 00:03:50.235619 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:03:50.277706 kernel: raid6: avx2x4 gen() 30105 MB/s
May 8 00:03:50.294694 kernel: raid6: avx2x2 gen() 30643 MB/s
May 8 00:03:50.311807 kernel: raid6: avx2x1 gen() 25962 MB/s
May 8 00:03:50.311838 kernel: raid6: using algorithm avx2x2 gen() 30643 MB/s
May 8 00:03:50.329806 kernel: raid6: .... xor() 19831 MB/s, rmw enabled
May 8 00:03:50.329845 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:03:50.351704 kernel: xor: automatically using best checksumming function avx
May 8 00:03:50.500716 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:03:50.514146 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:03:50.526990 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:03:50.542978 systemd-udevd[415]: Using default interface naming scheme 'v255'.
May 8 00:03:50.549401 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:03:50.557876 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:03:50.572990 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
May 8 00:03:50.609330 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:03:50.624848 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:03:50.695302 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:03:50.704839 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:03:50.723349 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:03:50.727369 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:03:50.730497 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:03:50.733332 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:03:50.742571 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 8 00:03:50.750124 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:03:50.752823 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:03:50.752846 kernel: GPT:9289727 != 19775487
May 8 00:03:50.752862 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:03:50.752876 kernel: GPT:9289727 != 19775487
May 8 00:03:50.752889 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:03:50.752903 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:03:50.752918 kernel: libata version 3.00 loaded.
May 8 00:03:50.750530 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:03:50.757166 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:03:50.759700 kernel: ahci 0000:00:1f.2: version 3.0
May 8 00:03:50.781524 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 8 00:03:50.781709 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 8 00:03:50.781888 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 8 00:03:50.782043 kernel: scsi host0: ahci
May 8 00:03:50.782223 kernel: scsi host1: ahci
May 8 00:03:50.782394 kernel: scsi host2: ahci
May 8 00:03:50.782583 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:03:50.782596 kernel: scsi host3: ahci
May 8 00:03:50.782823 kernel: AES CTR mode by8 optimization enabled
May 8 00:03:50.782836 kernel: scsi host4: ahci
May 8 00:03:50.782998 kernel: scsi host5: ahci
May 8 00:03:50.783156 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 8 00:03:50.783169 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 8 00:03:50.783180 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 8 00:03:50.783191 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 8 00:03:50.783202 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 8 00:03:50.783212 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 8 00:03:50.767954 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:03:50.790321 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:03:50.790520 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:03:50.809499 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (464)
May 8 00:03:50.809712 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (469)
May 8 00:03:50.792907 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:03:50.794807 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:03:50.795043 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:03:50.798255 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:03:50.807307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:03:50.826186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:03:50.849556 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:03:50.858482 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:03:50.871651 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:03:50.872111 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:03:50.880981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:03:50.897807 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:03:50.900768 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:03:50.908981 disk-uuid[557]: Primary Header is updated.
May 8 00:03:50.908981 disk-uuid[557]: Secondary Entries is updated.
May 8 00:03:50.908981 disk-uuid[557]: Secondary Header is updated.
May 8 00:03:50.913709 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:03:50.918702 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:03:50.922321 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:03:51.087706 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 8 00:03:51.087775 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 8 00:03:51.088710 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 00:03:51.090300 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 8 00:03:51.090336 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 8 00:03:51.090366 kernel: ata3.00: applying bridge limits
May 8 00:03:51.091705 kernel: ata3.00: configured for UDMA/100
May 8 00:03:51.092708 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 8 00:03:51.096705 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 00:03:51.096721 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 00:03:51.147708 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 8 00:03:51.161332 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 8 00:03:51.161351 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 8 00:03:51.920700 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:03:51.921004 disk-uuid[559]: The operation has completed successfully.
May 8 00:03:51.956159 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:03:51.956323 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:03:52.014837 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:03:52.018967 sh[594]: Success
May 8 00:03:52.031702 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 8 00:03:52.069610 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:03:52.088233 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:03:52.092302 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:03:52.102148 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a
May 8 00:03:52.102177 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:03:52.102188 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:03:52.103183 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:03:52.103925 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:03:52.108710 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:03:52.110207 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:03:52.116813 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:03:52.118482 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:03:52.138172 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:03:52.138215 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:03:52.138233 kernel: BTRFS info (device vda6): using free space tree
May 8 00:03:52.141734 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:03:52.146715 kernel: BTRFS info (device vda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:03:52.151795 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:03:52.157913 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:03:52.211942 ignition[683]: Ignition 2.20.0
May 8 00:03:52.211956 ignition[683]: Stage: fetch-offline
May 8 00:03:52.212010 ignition[683]: no configs at "/usr/lib/ignition/base.d"
May 8 00:03:52.212024 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:03:52.212150 ignition[683]: parsed url from cmdline: ""
May 8 00:03:52.212156 ignition[683]: no config URL provided
May 8 00:03:52.212163 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:03:52.212176 ignition[683]: no config at "/usr/lib/ignition/user.ign"
May 8 00:03:52.212210 ignition[683]: op(1): [started] loading QEMU firmware config module
May 8 00:03:52.212217 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:03:52.225376 ignition[683]: op(1): [finished] loading QEMU firmware config module
May 8 00:03:52.243599 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:03:52.251835 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:03:52.266864 ignition[683]: parsing config with SHA512: bf59cf2af661b97ec7bce4ba723180eac3a0b6b1ff99a29e071f1367edcbb91f170a2dfb483ebecec31df09343aea792cd81762a7444e07663ea80f7d999b3dc May 8 00:03:52.270526 unknown[683]: fetched base config from "system" May 8 00:03:52.271028 unknown[683]: fetched user config from "qemu" May 8 00:03:52.271868 ignition[683]: fetch-offline: fetch-offline passed May 8 00:03:52.271975 ignition[683]: Ignition finished successfully May 8 00:03:52.274699 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:03:52.281077 systemd-networkd[780]: lo: Link UP May 8 00:03:52.281088 systemd-networkd[780]: lo: Gained carrier May 8 00:03:52.282880 systemd-networkd[780]: Enumeration completed May 8 00:03:52.282981 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:03:52.283253 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:03:52.283258 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:03:52.284646 systemd-networkd[780]: eth0: Link UP May 8 00:03:52.284650 systemd-networkd[780]: eth0: Gained carrier May 8 00:03:52.284657 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:03:52.285361 systemd[1]: Reached target network.target - Network. May 8 00:03:52.287153 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:03:52.298796 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:03:52.302744 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:03:52.318052 ignition[784]: Ignition 2.20.0 May 8 00:03:52.318063 ignition[784]: Stage: kargs May 8 00:03:52.318219 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 8 00:03:52.318230 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:03:52.319105 ignition[784]: kargs: kargs passed May 8 00:03:52.319149 ignition[784]: Ignition finished successfully May 8 00:03:52.323140 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:03:52.334826 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:03:52.348365 ignition[793]: Ignition 2.20.0 May 8 00:03:52.348375 ignition[793]: Stage: disks May 8 00:03:52.348532 ignition[793]: no configs at "/usr/lib/ignition/base.d" May 8 00:03:52.348543 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:03:52.349383 ignition[793]: disks: disks passed May 8 00:03:52.351613 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:03:52.349424 ignition[793]: Ignition finished successfully May 8 00:03:52.352884 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:03:52.354402 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:03:52.356530 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:03:52.357542 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:03:52.359219 systemd[1]: Reached target basic.target - Basic System. May 8 00:03:52.371827 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 8 00:03:52.384688 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 00:03:52.391783 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:03:52.410790 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:03:52.494699 kernel: EXT4-fs (vda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none. May 8 00:03:52.495884 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:03:52.497787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:03:52.509764 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:03:52.511544 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:03:52.513131 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:03:52.521600 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) May 8 00:03:52.521627 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:03:52.521642 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:03:52.521656 kernel: BTRFS info (device vda6): using free space tree May 8 00:03:52.513176 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:03:52.524747 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:03:52.513200 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:03:52.526025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:03:52.531540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:03:52.543803 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:03:52.577114 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:03:52.581508 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory May 8 00:03:52.586270 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:03:52.590578 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:03:52.677160 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:03:52.688759 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:03:52.692210 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:03:52.697699 kernel: BTRFS info (device vda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:03:52.717459 ignition[926]: INFO : Ignition 2.20.0 May 8 00:03:52.717459 ignition[926]: INFO : Stage: mount May 8 00:03:52.719377 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:03:52.719377 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:03:52.719377 ignition[926]: INFO : mount: mount passed May 8 00:03:52.719377 ignition[926]: INFO : Ignition finished successfully May 8 00:03:52.723286 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:03:52.725405 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:03:52.742762 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:03:53.101556 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 8 00:03:53.111002 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:03:53.120418 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) May 8 00:03:53.120502 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:03:53.120515 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:03:53.121431 kernel: BTRFS info (device vda6): using free space tree May 8 00:03:53.124703 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:03:53.126756 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:03:53.145907 ignition[957]: INFO : Ignition 2.20.0 May 8 00:03:53.145907 ignition[957]: INFO : Stage: files May 8 00:03:53.148040 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:03:53.148040 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:03:53.148040 ignition[957]: DEBUG : files: compiled without relabeling support, skipping May 8 00:03:53.152233 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:03:53.152233 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:03:53.152233 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:03:53.152233 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:03:53.152233 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:03:53.151019 unknown[957]: wrote ssh authorized keys file for user: core May 8 00:03:53.161494 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:03:53.161494 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 8 00:03:53.188858 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:03:53.295927 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:03:53.295927 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:03:53.299933 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 8 00:03:53.664209 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 00:03:53.755848 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:03:53.755848 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:03:53.760307 
ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:03:53.760307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 8 00:03:54.155629 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 00:03:54.264990 systemd-networkd[780]: eth0: Gained IPv6LL May 8 00:03:54.521413 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 8 00:03:54.521413 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 00:03:54.525217 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:03:54.527417 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:03:54.527417 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 00:03:54.527417 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 8 00:03:54.531759 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:03:54.533660 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:03:54.533660 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 8 00:03:54.533660 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:03:54.553129 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:03:54.557701 ignition[957]: INFO : files: op(10): op(11): [finished] removing 
enablement symlink(s) for "coreos-metadata.service" May 8 00:03:54.559411 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:03:54.559411 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 8 00:03:54.562239 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:03:54.563706 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:03:54.565481 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:03:54.567222 ignition[957]: INFO : files: files passed May 8 00:03:54.567222 ignition[957]: INFO : Ignition finished successfully May 8 00:03:54.571280 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:03:54.584988 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:03:54.588251 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:03:54.591019 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:03:54.592052 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:03:54.599035 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory May 8 00:03:54.603207 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:03:54.603207 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:03:54.606455 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:03:54.610738 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:03:54.612326 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:03:54.620986 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:03:54.646522 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:03:54.647657 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:03:54.650366 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:03:54.652404 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:03:54.654501 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:03:54.666003 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:03:54.680303 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:03:54.698986 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:03:54.710897 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:03:54.711253 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:03:54.713628 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:03:54.713942 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:03:54.714061 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
May 8 00:03:54.717738 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:03:54.718236 systemd[1]: Stopped target basic.target - Basic System. May 8 00:03:54.718566 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:03:54.719061 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:03:54.719403 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:03:54.719758 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:03:54.720249 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:03:54.720593 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:03:54.721089 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:03:54.721444 systemd[1]: Stopped target swap.target - Swaps. May 8 00:03:54.721917 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:03:54.722028 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:03:54.739321 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:03:54.739667 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:03:54.740129 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:03:54.746228 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:03:54.748751 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:03:54.748872 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:03:54.751667 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:03:54.751807 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:03:54.752408 systemd[1]: Stopped target paths.target - Path Units. May 8 00:03:54.752653 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:03:54.760775 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:03:54.761145 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:03:54.763788 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:03:54.764107 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:03:54.764210 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:03:54.767145 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:03:54.767245 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:03:54.769078 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:03:54.769200 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:03:54.771071 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:03:54.771181 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:03:54.782810 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:03:54.783276 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:03:54.783396 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:03:54.784520 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:03:54.787075 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 8 00:03:54.787276 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:03:54.789149 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:03:54.789312 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:03:54.794016 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:03:54.794131 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:03:54.810772 ignition[1012]: INFO : Ignition 2.20.0 May 8 00:03:54.810772 ignition[1012]: INFO : Stage: umount May 8 00:03:54.814576 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:03:54.814576 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:03:54.812216 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:03:54.818635 ignition[1012]: INFO : umount: umount passed May 8 00:03:54.819613 ignition[1012]: INFO : Ignition finished successfully May 8 00:03:54.822966 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:03:54.823105 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:03:54.823913 systemd[1]: Stopped target network.target - Network. May 8 00:03:54.824310 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:03:54.824363 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:03:54.824744 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:03:54.824792 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:03:54.825064 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:03:54.825112 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:03:54.825406 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:03:54.825450 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:03:54.826026 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:03:54.826377 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:03:54.844830 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:03:54.845942 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:03:54.849779 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 8 00:03:54.851293 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:03:54.851344 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:03:54.861805 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:03:54.863697 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:03:54.863757 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:03:54.867179 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:03:54.868704 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:03:54.868827 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:03:54.881845 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 8 00:03:54.882183 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:03:54.884546 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 8 00:03:54.897454 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:03:54.897523 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:03:54.900646 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:03:54.900699 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:03:54.903745 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:03:54.903807 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:03:54.906968 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:03:54.907030 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:03:54.909987 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:03:54.910047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:03:54.949935 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:03:54.950222 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:03:54.950297 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:03:54.953986 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:03:54.954041 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:03:54.954421 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:03:54.954468 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:03:54.959009 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:03:54.959059 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:03:54.962385 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:03:54.962435 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:03:54.966605 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:03:54.966672 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 8 00:03:54.966750 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 8 00:03:54.966802 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 8 00:03:54.967244 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:03:54.967367 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:03:54.968431 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:03:54.968540 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:03:55.047437 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:03:55.047590 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:03:55.049761 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:03:55.051404 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:03:55.051463 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:03:55.063816 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:03:55.071285 systemd[1]: Switching root. May 8 00:03:55.107309 systemd-journald[194]: Journal stopped May 8 00:03:56.605989 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
May 8 00:03:56.606070 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:03:56.606091 kernel: SELinux: policy capability open_perms=1 May 8 00:03:56.606103 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:03:56.606114 kernel: SELinux: policy capability always_check_network=0 May 8 00:03:56.606128 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:03:56.606140 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:03:56.606156 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:03:56.606175 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:03:56.606187 kernel: audit: type=1403 audit(1746662635.699:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:03:56.606203 systemd[1]: Successfully loaded SELinux policy in 42.319ms. May 8 00:03:56.606235 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.190ms. May 8 00:03:56.606249 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:03:56.606262 systemd[1]: Detected virtualization kvm. May 8 00:03:56.606274 systemd[1]: Detected architecture x86-64. May 8 00:03:56.606287 systemd[1]: Detected first boot. May 8 00:03:56.606302 systemd[1]: Initializing machine ID from VM UUID. May 8 00:03:56.606314 zram_generator::config[1059]: No configuration found. May 8 00:03:56.606329 kernel: Guest personality initialized and is inactive May 8 00:03:56.606341 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 8 00:03:56.606353 kernel: Initialized host personality May 8 00:03:56.606364 kernel: NET: Registered PF_VSOCK protocol family May 8 00:03:56.606377 systemd[1]: Populated /etc with preset unit settings. May 8 00:03:56.606392 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 8 00:03:56.606408 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:03:56.606421 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:03:56.606439 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:03:56.606452 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:03:56.606465 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:03:56.606477 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:03:56.606490 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:03:56.606502 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:03:56.606515 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:03:56.606531 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:03:56.606543 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:03:56.606559 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:03:56.606576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 8 00:03:56.606589 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:03:56.606602 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:03:56.606615 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:03:56.606628 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:03:56.606644 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:03:56.606660 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:03:56.606685 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:03:56.606699 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:03:56.606712 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:03:56.606724 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:03:56.606739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:03:56.606757 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:03:56.606773 systemd[1]: Reached target slices.target - Slice Units. May 8 00:03:56.606789 systemd[1]: Reached target swap.target - Swaps. May 8 00:03:56.606802 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:03:56.606817 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:03:56.606829 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 8 00:03:56.606842 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:03:56.606855 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:03:56.606867 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:03:56.606879 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:03:56.606898 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:03:56.606915 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:03:56.606929 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:03:56.606942 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:03:56.606955 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:03:56.606968 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:03:56.606980 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:03:56.606993 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:03:56.607006 systemd[1]: Reached target machines.target - Containers. May 8 00:03:56.607023 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:03:56.607038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:03:56.607050 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:03:56.607063 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
May 8 00:03:56.607076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:03:56.607088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:03:56.607101 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:03:56.607114 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:03:56.607130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:03:56.607143 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:03:56.607156 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:03:56.607184 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:03:56.607196 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:03:56.607209 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:03:56.607223 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:03:56.607235 kernel: fuse: init (API version 7.39) May 8 00:03:56.607250 kernel: loop: module loaded May 8 00:03:56.607262 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:03:56.607275 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:03:56.607288 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:03:56.607300 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:03:56.607312 kernel: ACPI: bus type drm_connector registered May 8 00:03:56.607326 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 8 00:03:56.607358 systemd-journald[1137]: Collecting audit messages is disabled. May 8 00:03:56.607389 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:03:56.607402 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:03:56.607414 systemd-journald[1137]: Journal started May 8 00:03:56.607439 systemd-journald[1137]: Runtime Journal (/run/log/journal/4be40e6c05d14f2cac8db1629e27a550) is 6M, max 48.2M, 42.2M free. May 8 00:03:56.344944 systemd[1]: Queued start job for default target multi-user.target. May 8 00:03:56.607732 systemd[1]: Stopped verity-setup.service. May 8 00:03:56.366087 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:03:56.366696 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:03:56.612913 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:03:56.619714 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:03:56.621055 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:03:56.622305 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:03:56.623582 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:03:56.624743 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:03:56.626002 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
May 8 00:03:56.627322 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:03:56.628808 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:03:56.630471 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:03:56.632272 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:03:56.632573 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:03:56.634194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:03:56.634493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:03:56.636128 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:03:56.636462 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:03:56.638191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:03:56.638491 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:03:56.640154 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:03:56.640457 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:03:56.642185 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:03:56.642482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:03:56.644076 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:03:56.645735 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:03:56.647415 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:03:56.649042 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 8 00:03:56.663047 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:03:56.679865 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:03:56.682717 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:03:56.683898 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:03:56.683937 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:03:56.686426 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 8 00:03:56.689340 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:03:56.695053 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:03:56.696915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:03:56.698815 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:03:56.701831 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:03:56.703198 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:03:56.705895 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:03:56.707215 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 8 00:03:56.713741 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:03:56.716035 systemd-journald[1137]: Time spent on flushing to /var/log/journal/4be40e6c05d14f2cac8db1629e27a550 is 13.313ms for 1054 entries. May 8 00:03:56.716035 systemd-journald[1137]: System Journal (/var/log/journal/4be40e6c05d14f2cac8db1629e27a550) is 8M, max 195.6M, 187.6M free. May 8 00:03:56.745037 systemd-journald[1137]: Received client request to flush runtime journal. May 8 00:03:56.745085 kernel: loop0: detected capacity change from 0 to 147912 May 8 00:03:56.717040 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:03:56.721830 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:03:56.727758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:03:56.730990 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:03:56.732635 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:03:56.734280 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:03:56.736961 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:03:56.743301 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:03:56.752926 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 8 00:03:56.757838 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:03:56.760492 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:03:56.762437 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:03:56.774701 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:03:56.776461 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:03:56.779863 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 8 00:03:56.789229 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:03:56.795863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:03:56.802777 kernel: loop1: detected capacity change from 0 to 205544 May 8 00:03:56.818305 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. May 8 00:03:56.818325 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. May 8 00:03:56.825548 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:03:56.838827 kernel: loop2: detected capacity change from 0 to 138176 May 8 00:03:56.879770 kernel: loop3: detected capacity change from 0 to 147912 May 8 00:03:56.891707 kernel: loop4: detected capacity change from 0 to 205544 May 8 00:03:56.901734 kernel: loop5: detected capacity change from 0 to 138176 May 8 00:03:56.915361 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:03:56.916030 (sd-merge)[1204]: Merged extensions into '/usr'. May 8 00:03:56.920903 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:03:56.920924 systemd[1]: Reloading... May 8 00:03:56.996718 zram_generator::config[1238]: No configuration found. 
May 8 00:03:57.042222 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:03:57.116249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:03:57.182111 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:03:57.182248 systemd[1]: Reloading finished in 260 ms. May 8 00:03:57.205032 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:03:57.206651 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:03:57.231280 systemd[1]: Starting ensure-sysext.service... May 8 00:03:57.233314 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:03:57.256255 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:03:57.256623 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:03:57.257022 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... May 8 00:03:57.257036 systemd[1]: Reloading... May 8 00:03:57.257766 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:03:57.258054 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 8 00:03:57.258133 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 8 00:03:57.265792 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:03:57.265879 systemd-tmpfiles[1270]: Skipping /boot May 8 00:03:57.279933 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:03:57.280010 systemd-tmpfiles[1270]: Skipping /boot May 8 00:03:57.318697 zram_generator::config[1299]: No configuration found. May 8 00:03:57.439943 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:03:57.506906 systemd[1]: Reloading finished in 249 ms. May 8 00:03:57.522717 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:03:57.537598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:03:57.547336 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:03:57.549884 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:03:57.552584 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:03:57.557590 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:03:57.561219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:03:57.565817 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:03:57.570168 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:03:57.570341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 8 00:03:57.572071 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:03:57.582930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:03:57.585976 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:03:57.587218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:03:57.587501 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:03:57.591964 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:03:57.593315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:03:57.594930 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:03:57.597162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:03:57.597391 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:03:57.599434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:03:57.599784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:03:57.601815 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:03:57.602193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:03:57.606084 systemd-udevd[1348]: Using default interface naming scheme 'v255'. May 8 00:03:57.610831 augenrules[1368]: No rules May 8 00:03:57.613290 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:03:57.613571 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:03:57.619637 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:03:57.625634 systemd[1]: Finished ensure-sysext.service. May 8 00:03:57.627455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:03:57.632835 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:03:57.634056 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:03:57.636883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:03:57.640346 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:03:57.644829 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:03:57.649839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:03:57.651033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:03:57.651077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:03:57.654841 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:03:57.656833 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 8 00:03:57.659699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:03:57.660377 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:03:57.663948 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:03:57.666953 augenrules[1379]: /sbin/augenrules: No change May 8 00:03:57.669129 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:03:57.671074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:03:57.671320 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:03:57.673659 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:03:57.674051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:03:57.676045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:03:57.676277 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:03:57.678079 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:03:57.678366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:03:57.681069 augenrules[1419]: No rules May 8 00:03:57.682366 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:03:57.682628 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:03:57.689460 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:03:57.708036 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:03:57.715926 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:03:57.716996 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:03:57.717074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:03:57.717099 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:03:57.752703 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1394) May 8 00:03:57.789618 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:03:57.799226 systemd-resolved[1342]: Positive Trust Anchors: May 8 00:03:57.799410 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:03:57.799442 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:03:57.803066 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 8 00:03:57.806489 systemd-resolved[1342]: Defaulting to hostname 'linux'. May 8 00:03:57.807698 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:03:57.808226 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:03:57.809560 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:03:57.810831 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:03:57.812629 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:03:57.814699 kernel: ACPI: button: Power Button [PWRF] May 8 00:03:57.820220 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:03:57.837754 systemd-networkd[1438]: lo: Link UP May 8 00:03:57.838153 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 8 00:03:57.841562 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:03:57.841784 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:03:57.841799 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:03:57.841992 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 00:03:57.837762 systemd-networkd[1438]: lo: Gained carrier May 8 00:03:57.841247 systemd-networkd[1438]: Enumeration completed May 8 00:03:57.841367 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:03:57.841641 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:03:57.841646 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:03:57.842707 systemd-networkd[1438]: eth0: Link UP May 8 00:03:57.842711 systemd-networkd[1438]: eth0: Gained carrier May 8 00:03:57.842725 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:03:57.842951 systemd[1]: Reached target network.target - Network. May 8 00:03:57.854868 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 8 00:03:57.856828 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:03:57.858832 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:03:57.859051 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. May 8 00:03:57.859810 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:03:57.859859 systemd-timesyncd[1400]: Initial clock synchronization to Thu 2025-05-08 00:03:58.177893 UTC. May 8 00:03:57.898857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:03:57.900629 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 8 00:03:57.916434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:03:57.916821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:03:57.946305 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
May 8 00:03:57.962704 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:03:57.965835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:03:57.973888 kernel: kvm_amd: TSC scaling supported May 8 00:03:57.973942 kernel: kvm_amd: Nested Virtualization enabled May 8 00:03:57.973961 kernel: kvm_amd: Nested Paging enabled May 8 00:03:57.973980 kernel: kvm_amd: LBR virtualization supported May 8 00:03:57.975012 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 8 00:03:57.975078 kernel: kvm_amd: Virtual GIF supported May 8 00:03:57.994736 kernel: EDAC MC: Ver: 3.0.0 May 8 00:03:58.024628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:03:58.029115 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:03:58.041861 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:03:58.050600 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:03:58.081219 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:03:58.082835 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:03:58.084044 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:03:58.085291 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:03:58.086633 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:03:58.088326 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:03:58.089607 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:03:58.090954 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:03:58.092265 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:03:58.092295 systemd[1]: Reached target paths.target - Path Units. May 8 00:03:58.093269 systemd[1]: Reached target timers.target - Timer Units. May 8 00:03:58.095132 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:03:58.098014 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:03:58.101978 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 8 00:03:58.103508 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 8 00:03:58.104863 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 8 00:03:58.113205 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:03:58.114665 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 8 00:03:58.117143 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:03:58.118834 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:03:58.120027 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:03:58.121018 systemd[1]: Reached target basic.target - Basic System. May 8 00:03:58.122008 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
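The lvm2-activation warning above, "Failed to connect to lvmetad. Falling back to device scanning.", means the LVM metadata-caching daemon is not running, so the tools scan block devices directly; activation still succeeds, as the Finished lines show. On LVM 2.02-era builds that still include lvmetad, the fallback can be made the explicit default, which also silences the warning; a sketch, assuming /etc/lvm/lvm.conf is present and writable on this image:

    # /etc/lvm/lvm.conf (excerpt) -- disable the lvmetad client path
    global {
        use_lvmetad = 0
    }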
May 8 00:03:58.122039 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:03:58.123080 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:03:58.125302 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:03:58.128740 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:03:58.129082 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:03:58.135047 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:03:58.136260 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:03:58.139811 jq[1478]: false May 8 00:03:58.139876 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:03:58.142892 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:03:58.149335 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:03:58.152647 dbus-daemon[1477]: [system] SELinux support is enabled May 8 00:03:58.158449 extend-filesystems[1479]: Found loop3 May 8 00:03:58.158449 extend-filesystems[1479]: Found loop4 May 8 00:03:58.158449 extend-filesystems[1479]: Found loop5 May 8 00:03:58.158449 extend-filesystems[1479]: Found sr0 May 8 00:03:58.158449 extend-filesystems[1479]: Found vda May 8 00:03:58.158449 extend-filesystems[1479]: Found vda1 May 8 00:03:58.158449 extend-filesystems[1479]: Found vda2 May 8 00:03:58.158449 extend-filesystems[1479]: Found vda3 May 8 00:03:58.158449 extend-filesystems[1479]: Found usr May 8 00:03:58.158449 extend-filesystems[1479]: Found vda4 May 8 00:03:58.158449 extend-filesystems[1479]: Found vda6 May 8 00:03:58.158449 extend-filesystems[1479]: Found vda7 May 8 00:03:58.158449 extend-filesystems[1479]: Found vda9 May 8 00:03:58.158449 extend-filesystems[1479]: Checking size of /dev/vda9 May 8 00:03:58.155979 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:03:58.171952 extend-filesystems[1479]: Resized partition /dev/vda9 May 8 00:03:58.173186 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024) May 8 00:03:58.177728 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:03:58.179936 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:03:58.184854 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:03:58.185503 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:03:58.188804 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1416) May 8 00:03:58.186971 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:03:58.191829 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:03:58.194855 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:03:58.203379 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:03:58.204720 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:03:58.213729 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 8 00:03:58.229853 jq[1501]: true May 8 00:03:58.214090 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:03:58.214481 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:03:58.214970 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:03:58.221217 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:03:58.223777 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:03:58.232525 update_engine[1500]: I20250508 00:03:58.232141 1500 main.cc:92] Flatcar Update Engine starting May 8 00:03:58.233203 extend-filesystems[1496]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:03:58.233203 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:03:58.233203 extend-filesystems[1496]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:03:58.237787 extend-filesystems[1479]: Resized filesystem in /dev/vda9 May 8 00:03:58.239158 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:03:58.240171 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:03:58.240614 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:03:58.241753 update_engine[1500]: I20250508 00:03:58.241678 1500 update_check_scheduler.cc:74] Next update check in 11m24s May 8 00:03:58.246211 jq[1504]: true May 8 00:03:58.264143 tar[1503]: linux-amd64/helm May 8 00:03:58.271928 systemd-logind[1497]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:03:58.271958 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:03:58.273959 systemd-logind[1497]: New seat seat0. May 8 00:03:58.283270 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:03:58.285578 systemd[1]: Started update-engine.service - Update Engine. May 8 00:03:58.288418 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:03:58.288588 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:03:58.290739 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:03:58.290903 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:03:58.293876 bash[1532]: Updated "/home/core/.ssh/authorized_keys" May 8 00:03:58.301928 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:03:58.308430 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:03:58.311453 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:03:58.339393 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:03:58.358280 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:03:58.386055 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:03:58.395111 systemd[1]: Starting issuegen.service - Generate /run/issue... 
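The extend-filesystems sequence above grows the root filesystem online: 553472 and 1864699 are 4 KiB block counts, so the filesystem goes from roughly 553472 x 4096 bytes (about 2.1 GiB) to 1864699 x 4096 bytes (about 7.1 GiB), filling the enlarged /dev/vda9 partition while it is mounted on /. The resize2fs 1.47.1 messages are exactly what the manual equivalent would print; a minimal sketch:

    # grow a mounted ext4 filesystem to fill its block device
    resize2fs /dev/vda9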
May 8 00:03:58.403987 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:03:58.404365 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:03:58.413289 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:03:58.426191 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:03:58.440024 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:03:58.442456 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:03:58.443813 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:03:58.464853 containerd[1505]: time="2025-05-08T00:03:58.464730531Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 00:03:58.488011 containerd[1505]: time="2025-05-08T00:03:58.487972436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:03:58.490410 containerd[1505]: time="2025-05-08T00:03:58.490362047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:03:58.490410 containerd[1505]: time="2025-05-08T00:03:58.490401966Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:03:58.490488 containerd[1505]: time="2025-05-08T00:03:58.490426707Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:03:58.490642 containerd[1505]: time="2025-05-08T00:03:58.490614301Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:03:58.490667 containerd[1505]: time="2025-05-08T00:03:58.490646688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:03:58.490758 containerd[1505]: time="2025-05-08T00:03:58.490738849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:03:58.490783 containerd[1505]: time="2025-05-08T00:03:58.490756288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:03:58.491042 containerd[1505]: time="2025-05-08T00:03:58.491013229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:03:58.491042 containerd[1505]: time="2025-05-08T00:03:58.491033709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:03:58.491084 containerd[1505]: time="2025-05-08T00:03:58.491049147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:03:58.491084 containerd[1505]: time="2025-05-08T00:03:58.491059898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 8 00:03:58.491182 containerd[1505]: time="2025-05-08T00:03:58.491158122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:03:58.491435 containerd[1505]: time="2025-05-08T00:03:58.491408959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:03:58.491605 containerd[1505]: time="2025-05-08T00:03:58.491580115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:03:58.491605 containerd[1505]: time="2025-05-08T00:03:58.491597355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:03:58.491738 containerd[1505]: time="2025-05-08T00:03:58.491704266Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:03:58.491810 containerd[1505]: time="2025-05-08T00:03:58.491787198Z" level=info msg="metadata content store policy set" policy=shared May 8 00:03:58.499835 containerd[1505]: time="2025-05-08T00:03:58.499794182Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:03:58.499835 containerd[1505]: time="2025-05-08T00:03:58.499841069Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:03:58.499835 containerd[1505]: time="2025-05-08T00:03:58.499857768Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:03:58.500044 containerd[1505]: time="2025-05-08T00:03:58.499874488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:03:58.500044 containerd[1505]: time="2025-05-08T00:03:58.499889988Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:03:58.500150 containerd[1505]: time="2025-05-08T00:03:58.500064176Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:03:58.500470 containerd[1505]: time="2025-05-08T00:03:58.500304845Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:03:58.500470 containerd[1505]: time="2025-05-08T00:03:58.500425862Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:03:58.500470 containerd[1505]: time="2025-05-08T00:03:58.500440040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:03:58.500470 containerd[1505]: time="2025-05-08T00:03:58.500454510Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:03:58.500470 containerd[1505]: time="2025-05-08T00:03:58.500468104Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:03:58.500584 containerd[1505]: time="2025-05-08T00:03:58.500482604Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 May 8 00:03:58.500584 containerd[1505]: time="2025-05-08T00:03:58.500496064Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:03:58.500584 containerd[1505]: time="2025-05-08T00:03:58.500515690Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:03:58.500584 containerd[1505]: time="2025-05-08T00:03:58.500532003Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:03:58.500584 containerd[1505]: time="2025-05-08T00:03:58.500545337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:03:58.500584 containerd[1505]: time="2025-05-08T00:03:58.500557661Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:03:58.500584 containerd[1505]: time="2025-05-08T00:03:58.500568766Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:03:58.500584 containerd[1505]: time="2025-05-08T00:03:58.500588788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500603486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500616883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500628998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500641144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500654239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500666052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500678896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500692585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500708304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500740931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500753 containerd[1505]: time="2025-05-08T00:03:58.500752108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500764901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500778725Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500796924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500808852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500831311Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500884773Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500900513Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500910607Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500922118Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500932463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500949558Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500961610Z" level=info msg="NRI interface is disabled by configuration." May 8 00:03:58.500969 containerd[1505]: time="2025-05-08T00:03:58.500976444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 00:03:58.501288 containerd[1505]: time="2025-05-08T00:03:58.501233083Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:03:58.501288 containerd[1505]: time="2025-05-08T00:03:58.501286388Z" level=info msg="Connect containerd service" May 8 00:03:58.501472 containerd[1505]: time="2025-05-08T00:03:58.501322359Z" level=info msg="using legacy CRI server" May 8 00:03:58.501472 containerd[1505]: time="2025-05-08T00:03:58.501330984Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:03:58.501472 containerd[1505]: time="2025-05-08T00:03:58.501417875Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:03:58.502067 containerd[1505]: time="2025-05-08T00:03:58.502038680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:03:58.502408 
containerd[1505]: time="2025-05-08T00:03:58.502335446Z" level=info msg="Start subscribing containerd event" May 8 00:03:58.502437 containerd[1505]: time="2025-05-08T00:03:58.502428368Z" level=info msg="Start recovering state" May 8 00:03:58.502464 containerd[1505]: time="2025-05-08T00:03:58.502448369Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:03:58.502653 containerd[1505]: time="2025-05-08T00:03:58.502525426Z" level=info msg="Start event monitor" May 8 00:03:58.502653 containerd[1505]: time="2025-05-08T00:03:58.502556750Z" level=info msg="Start snapshots syncer" May 8 00:03:58.502653 containerd[1505]: time="2025-05-08T00:03:58.502567418Z" level=info msg="Start cni network conf syncer for default" May 8 00:03:58.502653 containerd[1505]: time="2025-05-08T00:03:58.502576397Z" level=info msg="Start streaming server" May 8 00:03:58.502760 containerd[1505]: time="2025-05-08T00:03:58.502658579Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:03:58.505596 containerd[1505]: time="2025-05-08T00:03:58.504266552Z" level=info msg="containerd successfully booted in 0.040887s" May 8 00:03:58.505237 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:03:58.666879 tar[1503]: linux-amd64/LICENSE May 8 00:03:58.666879 tar[1503]: linux-amd64/README.md May 8 00:03:58.688615 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:03:58.938750 systemd-networkd[1438]: eth0: Gained IPv6LL May 8 00:03:58.942168 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:03:58.944117 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:03:58.963010 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:03:58.965679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:03:58.968368 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:03:58.988871 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:03:58.989197 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:03:58.990926 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:03:58.995802 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:03:59.618686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:03:59.620736 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:03:59.622801 systemd[1]: Startup finished in 926ms (kernel) + 5.992s (initrd) + 3.963s (userspace) = 10.882s. May 8 00:03:59.625064 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:04:00.049890 kubelet[1590]: E0508 00:04:00.049734 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:04:00.055039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:04:00.055286 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:04:00.055692 systemd[1]: kubelet.service: Consumed 929ms CPU time, 236.8M memory peak. 
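containerd's only error in the startup above, "no network config found in /etc/cni/net.d", is expected on first boot: the CRI plugin defers pod networking until a CNI conflist appears (NetworkPluginConfDir:/etc/cni/net.d and NetworkPluginMaxConfNum:1 in the config dump). A minimal sketch of the kind of file it is waiting for; the file name and subnet are hypothetical, and a real cluster would normally get this from its CNI add-on rather than by hand:

    # /etc/cni/net.d/10-containerd-net.conflist -- hypothetical example
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [ [ { "subnet": "10.88.0.0/16" } ] ],
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

The kubelet crash is similarly self-explanatory: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm init or kubeadm join, which evidently has not run at this point, so the service keeps exiting with status 1 until it does. For orientation only, the file holds a KubeletConfiguration object; a minimal hand-written sketch whose field values are illustrative, not recovered from this host:

    # /var/lib/kubelet/config.yaml -- hypothetical minimal KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests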
May 8 00:04:03.113146 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:04:03.114616 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:53088.service - OpenSSH per-connection server daemon (10.0.0.1:53088). May 8 00:04:03.167544 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 53088 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:04:03.169797 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:04:03.181084 systemd-logind[1497]: New session 1 of user core. May 8 00:04:03.182747 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:04:03.189991 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:04:03.204108 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:04:03.216140 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:04:03.219619 (systemd)[1608]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:04:03.222004 systemd-logind[1497]: New session c1 of user core. May 8 00:04:03.375683 systemd[1608]: Queued start job for default target default.target. May 8 00:04:03.387271 systemd[1608]: Created slice app.slice - User Application Slice. May 8 00:04:03.387300 systemd[1608]: Reached target paths.target - Paths. May 8 00:04:03.387348 systemd[1608]: Reached target timers.target - Timers. May 8 00:04:03.389223 systemd[1608]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:04:03.401534 systemd[1608]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:04:03.401680 systemd[1608]: Reached target sockets.target - Sockets. May 8 00:04:03.401753 systemd[1608]: Reached target basic.target - Basic System. May 8 00:04:03.401801 systemd[1608]: Reached target default.target - Main User Target. May 8 00:04:03.401835 systemd[1608]: Startup finished in 171ms. May 8 00:04:03.402251 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:04:03.404102 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:04:03.474061 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:53102.service - OpenSSH per-connection server daemon (10.0.0.1:53102). May 8 00:04:03.509126 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 53102 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:04:03.510911 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:04:03.515284 systemd-logind[1497]: New session 2 of user core. May 8 00:04:03.524956 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:04:03.579259 sshd[1621]: Connection closed by 10.0.0.1 port 53102 May 8 00:04:03.579655 sshd-session[1619]: pam_unix(sshd:session): session closed for user core May 8 00:04:03.591887 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:53102.service: Deactivated successfully. May 8 00:04:03.593958 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:04:03.595524 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit. May 8 00:04:03.608048 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:53118.service - OpenSSH per-connection server daemon (10.0.0.1:53118). May 8 00:04:03.609252 systemd-logind[1497]: Removed session 2. 
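The instance name sshd@0-10.0.0.54:22-10.0.0.1:53088.service is the signature of socket activation with Accept=yes: systemd accepts each TCP connection on the listening socket itself and spawns a per-connection sshd@ instance named after a connection counter plus the local and remote endpoints. The shape of such a socket unit, sketched for orientation (the actual unit here comes from systemd-ssh-generator / Flatcar's shipped sshd.socket):

    # sshd.socket (shape only)
    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target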
May 8 00:04:03.644131 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 53118 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:04:03.645593 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:04:03.649868 systemd-logind[1497]: New session 3 of user core. May 8 00:04:03.659829 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:04:03.711198 sshd[1629]: Connection closed by 10.0.0.1 port 53118 May 8 00:04:03.711572 sshd-session[1626]: pam_unix(sshd:session): session closed for user core May 8 00:04:03.732862 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:53118.service: Deactivated successfully. May 8 00:04:03.735212 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:04:03.737088 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit. May 8 00:04:03.752130 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:53130.service - OpenSSH per-connection server daemon (10.0.0.1:53130). May 8 00:04:03.753222 systemd-logind[1497]: Removed session 3. May 8 00:04:03.791295 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 53130 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:04:03.792984 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:04:03.797486 systemd-logind[1497]: New session 4 of user core. May 8 00:04:03.806829 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:04:03.860725 sshd[1637]: Connection closed by 10.0.0.1 port 53130 May 8 00:04:03.861072 sshd-session[1634]: pam_unix(sshd:session): session closed for user core May 8 00:04:03.872240 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:53130.service: Deactivated successfully. May 8 00:04:03.873992 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:04:03.875384 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit. May 8 00:04:03.883918 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:53136.service - OpenSSH per-connection server daemon (10.0.0.1:53136). May 8 00:04:03.884750 systemd-logind[1497]: Removed session 4. May 8 00:04:03.919092 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 53136 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:04:03.920689 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:04:03.925398 systemd-logind[1497]: New session 5 of user core. May 8 00:04:03.935847 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:04:03.996466 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:04:03.996961 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:04:04.020069 sudo[1646]: pam_unix(sudo:session): session closed for user root May 8 00:04:04.022349 sshd[1645]: Connection closed by 10.0.0.1 port 53136 May 8 00:04:04.023076 sshd-session[1642]: pam_unix(sshd:session): session closed for user core May 8 00:04:04.036954 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:53136.service: Deactivated successfully. May 8 00:04:04.039821 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:04:04.040854 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit. May 8 00:04:04.054264 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:53138.service - OpenSSH per-connection server daemon (10.0.0.1:53138). May 8 00:04:04.055596 systemd-logind[1497]: Removed session 5. 
May 8 00:04:04.090069 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 53138 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:04:04.091975 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:04:04.096683 systemd-logind[1497]: New session 6 of user core. May 8 00:04:04.107874 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:04:04.167269 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:04:04.167756 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:04:04.172471 sudo[1656]: pam_unix(sudo:session): session closed for user root May 8 00:04:04.180957 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 00:04:04.181385 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:04:04.203078 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:04:04.241416 augenrules[1678]: No rules May 8 00:04:04.243821 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:04:04.244243 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:04:04.245629 sudo[1655]: pam_unix(sudo:session): session closed for user root May 8 00:04:04.247646 sshd[1654]: Connection closed by 10.0.0.1 port 53138 May 8 00:04:04.248075 sshd-session[1651]: pam_unix(sshd:session): session closed for user core May 8 00:04:04.259932 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:53138.service: Deactivated successfully. May 8 00:04:04.262926 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:04:04.265372 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit. May 8 00:04:04.275231 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:53140.service - OpenSSH per-connection server daemon (10.0.0.1:53140). May 8 00:04:04.276758 systemd-logind[1497]: Removed session 6. May 8 00:04:04.318056 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 53140 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:04:04.320103 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:04:04.325785 systemd-logind[1497]: New session 7 of user core. May 8 00:04:04.336872 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:04:04.391977 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:04:04.392324 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:04:04.908964 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:04:04.909161 (dockerd)[1711]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:04:05.475462 dockerd[1711]: time="2025-05-08T00:04:05.475310213Z" level=info msg="Starting up" May 8 00:04:06.139868 dockerd[1711]: time="2025-05-08T00:04:06.139801261Z" level=info msg="Loading containers: start." May 8 00:04:06.348732 kernel: Initializing XFRM netlink socket May 8 00:04:06.439071 systemd-networkd[1438]: docker0: Link UP May 8 00:04:06.477428 dockerd[1711]: time="2025-05-08T00:04:06.477366982Z" level=info msg="Loading containers: done." 
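The sudo session above removes the two shipped audit-rule fragments and restarts audit-rules.service, after which augenrules reports "No rules": augenrules simply concatenates every /etc/audit/rules.d/*.rules fragment into /etc/audit/audit.rules and loads the result, so an emptied directory yields an empty ruleset. Restoring auditing is a matter of dropping a fragment back in; a hypothetical example (the file name, buffer size, and watch are illustrative):

    # /etc/audit/rules.d/10-example.rules -- hypothetical fragment
    -D
    -b 8192
    -w /etc/kubernetes/ -p wa -k k8s-config

    # rebuild and load the merged ruleset
    augenrules --load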
May 8 00:04:06.502225 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck13040150-merged.mount: Deactivated successfully. May 8 00:04:06.502786 dockerd[1711]: time="2025-05-08T00:04:06.502306548Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:04:06.502786 dockerd[1711]: time="2025-05-08T00:04:06.502406440Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 8 00:04:06.502786 dockerd[1711]: time="2025-05-08T00:04:06.502524347Z" level=info msg="Daemon has completed initialization" May 8 00:04:06.544015 dockerd[1711]: time="2025-05-08T00:04:06.543931027Z" level=info msg="API listen on /run/docker.sock" May 8 00:04:06.544129 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:04:07.389704 containerd[1505]: time="2025-05-08T00:04:07.389564435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 8 00:04:08.073095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876586077.mount: Deactivated successfully. May 8 00:04:09.431742 containerd[1505]: time="2025-05-08T00:04:09.431620302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:09.434969 containerd[1505]: time="2025-05-08T00:04:09.434916954Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 8 00:04:09.435810 containerd[1505]: time="2025-05-08T00:04:09.435744581Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:09.439433 containerd[1505]: time="2025-05-08T00:04:09.439377216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:09.440396 containerd[1505]: time="2025-05-08T00:04:09.440339777Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.050697083s" May 8 00:04:09.440396 containerd[1505]: time="2025-05-08T00:04:09.440376771Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 8 00:04:09.443378 containerd[1505]: time="2025-05-08T00:04:09.443340708Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 8 00:04:10.307037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:04:10.313859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:04:10.510487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
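Once dockerd logs "API listen on /run/docker.sock", the daemon can be probed over that UNIX socket without the docker CLI; a quick sketch using the Engine API's ping endpoint:

    $ curl --unix-socket /run/docker.sock http://localhost/_ping
    OK

The overlay2 warning just above it is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, Docker cannot use the kernel's native diff path and, as the message itself says, image builds may be slower; normal container operation is unaffected.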
May 8 00:04:10.515119 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:04:10.830352 kubelet[1969]: E0508 00:04:10.830274 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:04:10.837379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:04:10.837657 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:04:10.838150 systemd[1]: kubelet.service: Consumed 228ms CPU time, 98.2M memory peak. May 8 00:04:11.248263 containerd[1505]: time="2025-05-08T00:04:11.248111796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:11.249160 containerd[1505]: time="2025-05-08T00:04:11.249077617Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 8 00:04:11.250099 containerd[1505]: time="2025-05-08T00:04:11.250067672Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:11.252783 containerd[1505]: time="2025-05-08T00:04:11.252739263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:11.253857 containerd[1505]: time="2025-05-08T00:04:11.253822358Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.810446299s" May 8 00:04:11.253857 containerd[1505]: time="2025-05-08T00:04:11.253857539Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 8 00:04:11.254359 containerd[1505]: time="2025-05-08T00:04:11.254327909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 8 00:04:13.279865 containerd[1505]: time="2025-05-08T00:04:13.279767109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:13.282078 containerd[1505]: time="2025-05-08T00:04:13.281928337Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 8 00:04:13.283940 containerd[1505]: time="2025-05-08T00:04:13.283799344Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:13.296098 containerd[1505]: time="2025-05-08T00:04:13.295983720Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:13.300649 containerd[1505]: time="2025-05-08T00:04:13.298185912Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.043826077s" May 8 00:04:13.300649 containerd[1505]: time="2025-05-08T00:04:13.298227864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 8 00:04:13.300649 containerd[1505]: time="2025-05-08T00:04:13.299041686Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 8 00:04:15.301940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848004218.mount: Deactivated successfully. May 8 00:04:16.735808 containerd[1505]: time="2025-05-08T00:04:16.735723969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:16.768619 containerd[1505]: time="2025-05-08T00:04:16.768502137Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 8 00:04:16.794919 containerd[1505]: time="2025-05-08T00:04:16.794856329Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:16.822712 containerd[1505]: time="2025-05-08T00:04:16.822556950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:16.823812 containerd[1505]: time="2025-05-08T00:04:16.823746946Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 3.524660487s" May 8 00:04:16.823887 containerd[1505]: time="2025-05-08T00:04:16.823814263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 8 00:04:16.825812 containerd[1505]: time="2025-05-08T00:04:16.825776036Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:04:19.096584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854600654.mount: Deactivated successfully. 
May 8 00:04:20.692921 containerd[1505]: time="2025-05-08T00:04:20.692836475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:20.713250 containerd[1505]: time="2025-05-08T00:04:20.713198269Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 00:04:20.730992 containerd[1505]: time="2025-05-08T00:04:20.730920784Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:20.751690 containerd[1505]: time="2025-05-08T00:04:20.751619720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:20.752764 containerd[1505]: time="2025-05-08T00:04:20.752717151Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.926807298s" May 8 00:04:20.752764 containerd[1505]: time="2025-05-08T00:04:20.752768896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:04:20.753640 containerd[1505]: time="2025-05-08T00:04:20.753443327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:04:20.850376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:04:20.861920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:04:21.014850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:04:21.020255 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:04:21.456830 kubelet[2047]: E0508 00:04:21.456662 2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:04:21.461095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:04:21.461338 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:04:21.461761 systemd[1]: kubelet.service: Consumed 307ms CPU time, 96.7M memory peak. May 8 00:04:22.127515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355714500.mount: Deactivated successfully. 
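"kubelet.service: Scheduled restart job, restart counter is at 2" shows that systemd, not an operator, keeps re-running the failing kubelet. The ten-second spacing visible in the timestamps (failures at 00:04:00 and 00:04:10, next attempt scheduled at 00:04:20) matches the usual kubeadm-style unit settings; a sketch of the relevant directives, assuming a conventional kubelet.service rather than quoting this host's unit:

    # kubelet.service (excerpt, shape only)
    [Service]
    Restart=always
    RestartSec=10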
May 8 00:04:22.133179 containerd[1505]: time="2025-05-08T00:04:22.133131221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:22.134009 containerd[1505]: time="2025-05-08T00:04:22.133957449Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 8 00:04:22.135039 containerd[1505]: time="2025-05-08T00:04:22.135006091Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:22.137306 containerd[1505]: time="2025-05-08T00:04:22.137274522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:22.137992 containerd[1505]: time="2025-05-08T00:04:22.137960833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.384487962s" May 8 00:04:22.137992 containerd[1505]: time="2025-05-08T00:04:22.137990185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 8 00:04:22.138454 containerd[1505]: time="2025-05-08T00:04:22.138412868Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 8 00:04:23.800625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount34164527.mount: Deactivated successfully. May 8 00:04:26.707416 containerd[1505]: time="2025-05-08T00:04:26.707347834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:26.708160 containerd[1505]: time="2025-05-08T00:04:26.708078292Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 8 00:04:26.709389 containerd[1505]: time="2025-05-08T00:04:26.709354259Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:26.712481 containerd[1505]: time="2025-05-08T00:04:26.712423509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:26.713668 containerd[1505]: time="2025-05-08T00:04:26.713635134Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.575183074s" May 8 00:04:26.713668 containerd[1505]: time="2025-05-08T00:04:26.713665229Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 8 00:04:28.819845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
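Two details of the earlier CRI config dump are worth reading against this pull of pause:3.10. The dump shows SandboxImage:registry.k8s.io/pause:3.8, containerd's built-in default, while the cluster side evidently wants pause:3.10, so the sandbox container containerd actually starts would still be 3.8 unless the two are aligned. The dump also shows the runc runtime already at SystemdCgroup:true, consistent with the CgroupDriver "systemd" the kubelet reports further down. Both settings live in containerd's CRI section; a sketch of the corresponding /etc/containerd/config.toml fragment, version 2 layout assumed:

    # /etc/containerd/config.toml (excerpt) -- align sandbox image, keep systemd cgroups
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true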
May 8 00:04:28.820018 systemd[1]: kubelet.service: Consumed 307ms CPU time, 96.7M memory peak. May 8 00:04:28.831944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:04:28.865897 systemd[1]: Reload requested from client PID 2143 ('systemctl') (unit session-7.scope)... May 8 00:04:28.865923 systemd[1]: Reloading... May 8 00:04:28.978190 zram_generator::config[2187]: No configuration found. May 8 00:04:29.331867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:04:29.447016 systemd[1]: Reloading finished in 580 ms. May 8 00:04:29.505441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:04:29.511410 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:04:29.516348 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:04:29.518103 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:04:29.518490 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:04:29.518589 systemd[1]: kubelet.service: Consumed 152ms CPU time, 87.4M memory peak. May 8 00:04:29.539106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:04:29.700776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:04:29.706531 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:04:29.745203 kubelet[2242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:04:29.745203 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:04:29.745203 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
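During the reload above, systemd flags docker.socket for referencing the legacy /var/run directory and rewrites the path on the fly ("updating /var/run/docker.sock → /run/docker.sock"). A permanent fix is a drop-in that clears the inherited listener list (an empty ListenStream= resets it) and re-adds the canonical path; a sketch, with the drop-in file name hypothetical:

    # /etc/systemd/system/docker.socket.d/10-run-path.conf -- hypothetical drop-in
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock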
May 8 00:04:29.745742 kubelet[2242]: I0508 00:04:29.745455 2242 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:04:30.315199 kubelet[2242]: I0508 00:04:30.315144 2242 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:04:30.315199 kubelet[2242]: I0508 00:04:30.315183 2242 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:04:30.315485 kubelet[2242]: I0508 00:04:30.315466 2242 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:04:30.338598 kubelet[2242]: I0508 00:04:30.338555 2242 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:04:30.340045 kubelet[2242]: E0508 00:04:30.339990 2242 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:30.345593 kubelet[2242]: E0508 00:04:30.345562 2242 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:04:30.345593 kubelet[2242]: I0508 00:04:30.345591 2242 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:04:30.352017 kubelet[2242]: I0508 00:04:30.351978 2242 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:04:30.352888 kubelet[2242]: I0508 00:04:30.352859 2242 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:04:30.353071 kubelet[2242]: I0508 00:04:30.353027 2242 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:04:30.353228 kubelet[2242]: I0508 00:04:30.353060 2242 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:04:30.353228 kubelet[2242]: I0508 00:04:30.353225 2242 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:04:30.353362 kubelet[2242]: I0508 00:04:30.353234 2242 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:04:30.353362 kubelet[2242]: I0508 00:04:30.353354 2242 state_mem.go:36] "Initialized new in-memory state store" May 8 00:04:30.354756 kubelet[2242]: I0508 00:04:30.354710 2242 kubelet.go:408] "Attempting to sync node with API server" May 8 00:04:30.354756 kubelet[2242]: I0508 00:04:30.354760 2242 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:04:30.354896 kubelet[2242]: I0508 00:04:30.354814 2242 kubelet.go:314] "Adding apiserver pod source" May 8 00:04:30.354896 kubelet[2242]: I0508 00:04:30.354834 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:04:30.359973 kubelet[2242]: I0508 00:04:30.359000 2242 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:04:30.359973 kubelet[2242]: W0508 00:04:30.359586 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:30.359973 kubelet[2242]: E0508 00:04:30.359644 2242 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:30.360305 kubelet[2242]: W0508 00:04:30.360257 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:30.360347 kubelet[2242]: E0508 00:04:30.360305 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:30.361188 kubelet[2242]: I0508 00:04:30.361153 2242 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:04:30.361753 kubelet[2242]: W0508 00:04:30.361722 2242 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:04:30.362567 kubelet[2242]: I0508 00:04:30.362455 2242 server.go:1269] "Started kubelet" May 8 00:04:30.363171 kubelet[2242]: I0508 00:04:30.362800 2242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:04:30.363171 kubelet[2242]: I0508 00:04:30.363166 2242 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:04:30.363240 kubelet[2242]: I0508 00:04:30.363218 2242 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:04:30.364109 kubelet[2242]: I0508 00:04:30.363841 2242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:04:30.364733 kubelet[2242]: I0508 00:04:30.364165 2242 server.go:460] "Adding debug handlers to kubelet server" May 8 00:04:30.367245 kubelet[2242]: I0508 00:04:30.367198 2242 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:04:30.370102 kubelet[2242]: I0508 00:04:30.370082 2242 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:04:30.370318 kubelet[2242]: E0508 00:04:30.370298 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:30.370374 kubelet[2242]: I0508 00:04:30.370357 2242 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:04:30.370447 kubelet[2242]: I0508 00:04:30.370434 2242 reconciler.go:26] "Reconciler: start to sync state" May 8 00:04:30.370803 kubelet[2242]: W0508 00:04:30.370719 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:30.370803 kubelet[2242]: E0508 00:04:30.370765 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:30.370882 kubelet[2242]: E0508 00:04:30.370810 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" May 8 00:04:30.371931 kubelet[2242]: I0508 00:04:30.371910 2242 factory.go:221] Registration of the systemd container factory successfully May 8 00:04:30.372130 kubelet[2242]: E0508 00:04:30.371972 2242 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:04:30.372748 kubelet[2242]: I0508 00:04:30.372110 2242 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:04:30.373476 kubelet[2242]: E0508 00:04:30.371432 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d6471c963363c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:04:30.362424892 +0000 UTC m=+0.651134001,LastTimestamp:2025-05-08 00:04:30.362424892 +0000 UTC m=+0.651134001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:04:30.373647 kubelet[2242]: I0508 00:04:30.373509 2242 factory.go:221] Registration of the containerd container factory successfully May 8 00:04:30.384584 kubelet[2242]: I0508 00:04:30.384411 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:04:30.385935 kubelet[2242]: I0508 00:04:30.385907 2242 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:04:30.386011 kubelet[2242]: I0508 00:04:30.385944 2242 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:04:30.386011 kubelet[2242]: I0508 00:04:30.385966 2242 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:04:30.386064 kubelet[2242]: E0508 00:04:30.386007 2242 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:04:30.390648 kubelet[2242]: W0508 00:04:30.390615 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:30.390736 kubelet[2242]: E0508 00:04:30.390650 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:30.390884 kubelet[2242]: I0508 00:04:30.390847 2242 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:04:30.390884 kubelet[2242]: I0508 00:04:30.390863 2242 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:04:30.390884 kubelet[2242]: I0508 00:04:30.390880 2242 state_mem.go:36] "Initialized new in-memory state store" May 8 00:04:30.471443 kubelet[2242]: E0508 00:04:30.471386 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:30.486782 kubelet[2242]: E0508 00:04:30.486737 2242 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:04:30.571780 kubelet[2242]: E0508 00:04:30.571608 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:30.571865 kubelet[2242]: E0508 00:04:30.571809 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" May 8 00:04:30.672329 kubelet[2242]: E0508 00:04:30.672279 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:30.687475 kubelet[2242]: E0508 00:04:30.687414 2242 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:04:30.745929 kubelet[2242]: I0508 00:04:30.745873 2242 policy_none.go:49] "None policy: Start" May 8 00:04:30.746600 kubelet[2242]: I0508 00:04:30.746568 2242 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:04:30.746600 kubelet[2242]: I0508 00:04:30.746593 2242 state_mem.go:35] "Initializing new in-memory state store" May 8 00:04:30.759433 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:04:30.773056 kubelet[2242]: E0508 00:04:30.773021 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:30.773865 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 8 00:04:30.777068 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:04:30.789232 kubelet[2242]: I0508 00:04:30.788599 2242 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:04:30.789232 kubelet[2242]: I0508 00:04:30.788870 2242 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:04:30.789232 kubelet[2242]: I0508 00:04:30.788882 2242 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:04:30.789232 kubelet[2242]: I0508 00:04:30.789134 2242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:04:30.790486 kubelet[2242]: E0508 00:04:30.790448 2242 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:04:30.890944 kubelet[2242]: I0508 00:04:30.890908 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:04:30.891412 kubelet[2242]: E0508 00:04:30.891339 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 8 00:04:30.973281 kubelet[2242]: E0508 00:04:30.973215 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" May 8 00:04:31.092070 kubelet[2242]: I0508 00:04:31.092043 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:04:31.092332 kubelet[2242]: E0508 00:04:31.092308 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 8 00:04:31.095533 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 8 00:04:31.116002 systemd[1]: Created slice kubepods-burstable-pod4e110e6cd779cb0bea97f3f5a72b9687.slice - libcontainer container kubepods-burstable-pod4e110e6cd779cb0bea97f3f5a72b9687.slice. May 8 00:04:31.129902 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. 
May 8 00:04:31.174810 kubelet[2242]: I0508 00:04:31.174715 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e110e6cd779cb0bea97f3f5a72b9687-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e110e6cd779cb0bea97f3f5a72b9687\") " pod="kube-system/kube-apiserver-localhost" May 8 00:04:31.174810 kubelet[2242]: I0508 00:04:31.174746 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e110e6cd779cb0bea97f3f5a72b9687-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e110e6cd779cb0bea97f3f5a72b9687\") " pod="kube-system/kube-apiserver-localhost" May 8 00:04:31.174810 kubelet[2242]: I0508 00:04:31.174762 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:04:31.174810 kubelet[2242]: I0508 00:04:31.174779 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:04:31.175066 kubelet[2242]: I0508 00:04:31.174794 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:04:31.175101 kubelet[2242]: I0508 00:04:31.175076 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e110e6cd779cb0bea97f3f5a72b9687-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4e110e6cd779cb0bea97f3f5a72b9687\") " pod="kube-system/kube-apiserver-localhost" May 8 00:04:31.175101 kubelet[2242]: I0508 00:04:31.175091 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:04:31.175141 kubelet[2242]: I0508 00:04:31.175103 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:04:31.175141 kubelet[2242]: I0508 00:04:31.175117 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 8 00:04:31.343071 kubelet[2242]: W0508 00:04:31.343014 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:31.343071 kubelet[2242]: E0508 00:04:31.343073 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:31.414997 containerd[1505]: time="2025-05-08T00:04:31.414934891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 8 00:04:31.428972 containerd[1505]: time="2025-05-08T00:04:31.428831084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4e110e6cd779cb0bea97f3f5a72b9687,Namespace:kube-system,Attempt:0,}" May 8 00:04:31.433023 containerd[1505]: time="2025-05-08T00:04:31.432994880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 8 00:04:31.494083 kubelet[2242]: I0508 00:04:31.494035 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:04:31.494518 kubelet[2242]: E0508 00:04:31.494467 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 8 00:04:31.567993 kubelet[2242]: W0508 00:04:31.567943 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:31.568067 kubelet[2242]: E0508 00:04:31.567990 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:31.705714 kubelet[2242]: W0508 00:04:31.705506 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:31.705714 kubelet[2242]: E0508 00:04:31.705574 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:31.755283 kubelet[2242]: W0508 00:04:31.755217 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused 
May 8 00:04:31.755283 kubelet[2242]: E0508 00:04:31.755274 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:31.774214 kubelet[2242]: E0508 00:04:31.774153 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s" May 8 00:04:31.841492 kubelet[2242]: E0508 00:04:31.841359 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d6471c963363c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:04:30.362424892 +0000 UTC m=+0.651134001,LastTimestamp:2025-05-08 00:04:30.362424892 +0000 UTC m=+0.651134001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:04:32.296670 kubelet[2242]: I0508 00:04:32.296618 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:04:32.297073 kubelet[2242]: E0508 00:04:32.297041 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 8 00:04:32.529655 kubelet[2242]: E0508 00:04:32.529597 2242 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:32.877785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988784086.mount: Deactivated successfully. 
May 8 00:04:32.884397 containerd[1505]: time="2025-05-08T00:04:32.884336579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:04:32.886326 containerd[1505]: time="2025-05-08T00:04:32.886269970Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:04:32.890263 containerd[1505]: time="2025-05-08T00:04:32.890207608Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:04:32.891850 containerd[1505]: time="2025-05-08T00:04:32.891782096Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:04:32.892668 containerd[1505]: time="2025-05-08T00:04:32.892630014Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:04:32.896349 containerd[1505]: time="2025-05-08T00:04:32.896294384Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:04:32.923872 containerd[1505]: time="2025-05-08T00:04:32.923772424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:04:33.015355 containerd[1505]: time="2025-05-08T00:04:33.015272327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:04:33.017684 containerd[1505]: time="2025-05-08T00:04:33.017651942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.584585983s" May 8 00:04:33.018248 containerd[1505]: time="2025-05-08T00:04:33.018200932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.589293766s" May 8 00:04:33.037834 kubelet[2242]: W0508 00:04:33.037741 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:33.037834 kubelet[2242]: E0508 00:04:33.037822 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:33.096704 containerd[1505]: 
time="2025-05-08T00:04:33.096640482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.681575054s" May 8 00:04:33.217545 kubelet[2242]: W0508 00:04:33.217372 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:33.217545 kubelet[2242]: E0508 00:04:33.217450 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:33.375378 kubelet[2242]: E0508 00:04:33.375314 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="3.2s" May 8 00:04:33.895148 containerd[1505]: time="2025-05-08T00:04:33.894197353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:04:33.895148 containerd[1505]: time="2025-05-08T00:04:33.895088177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:04:33.895148 containerd[1505]: time="2025-05-08T00:04:33.895101030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:33.895617 containerd[1505]: time="2025-05-08T00:04:33.895191664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:33.900742 kubelet[2242]: I0508 00:04:33.900716 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:04:33.901121 kubelet[2242]: E0508 00:04:33.901089 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 8 00:04:33.973869 systemd[1]: Started cri-containerd-24328b23016f090461f0896bd78bbcf0891aa40fb68bd4424840855f03503b36.scope - libcontainer container 24328b23016f090461f0896bd78bbcf0891aa40fb68bd4424840855f03503b36. 
May 8 00:04:34.008644 containerd[1505]: time="2025-05-08T00:04:34.008576412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24328b23016f090461f0896bd78bbcf0891aa40fb68bd4424840855f03503b36\"" May 8 00:04:34.011245 containerd[1505]: time="2025-05-08T00:04:34.011205801Z" level=info msg="CreateContainer within sandbox \"24328b23016f090461f0896bd78bbcf0891aa40fb68bd4424840855f03503b36\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:04:34.104185 containerd[1505]: time="2025-05-08T00:04:34.104089440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:04:34.104185 containerd[1505]: time="2025-05-08T00:04:34.104158455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:04:34.104185 containerd[1505]: time="2025-05-08T00:04:34.104170978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:34.104361 containerd[1505]: time="2025-05-08T00:04:34.104256615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:34.122845 systemd[1]: Started cri-containerd-4023147f23ed0e04677cdb63b2b7fd58fedf2cf10f7b7a7424e39b741d4c95bb.scope - libcontainer container 4023147f23ed0e04677cdb63b2b7fd58fedf2cf10f7b7a7424e39b741d4c95bb. May 8 00:04:34.163582 kubelet[2242]: W0508 00:04:34.163427 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:34.163582 kubelet[2242]: E0508 00:04:34.163490 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:34.167384 containerd[1505]: time="2025-05-08T00:04:34.167331022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4e110e6cd779cb0bea97f3f5a72b9687,Namespace:kube-system,Attempt:0,} returns sandbox id \"4023147f23ed0e04677cdb63b2b7fd58fedf2cf10f7b7a7424e39b741d4c95bb\"" May 8 00:04:34.169631 containerd[1505]: time="2025-05-08T00:04:34.169594619Z" level=info msg="CreateContainer within sandbox \"4023147f23ed0e04677cdb63b2b7fd58fedf2cf10f7b7a7424e39b741d4c95bb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:04:34.196596 kubelet[2242]: W0508 00:04:34.196522 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 8 00:04:34.196656 kubelet[2242]: E0508 00:04:34.196601 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 8 00:04:34.249402 containerd[1505]: time="2025-05-08T00:04:34.249323133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:04:34.249402 containerd[1505]: time="2025-05-08T00:04:34.249372158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:04:34.249402 containerd[1505]: time="2025-05-08T00:04:34.249382825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:34.249716 containerd[1505]: time="2025-05-08T00:04:34.249628071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:34.288981 systemd[1]: Started cri-containerd-db7c9aa4c73974cdaeb84b5b5fcac89aa7648817a3e52d6f5a49d9c9de9495ed.scope - libcontainer container db7c9aa4c73974cdaeb84b5b5fcac89aa7648817a3e52d6f5a49d9c9de9495ed. May 8 00:04:34.325624 containerd[1505]: time="2025-05-08T00:04:34.325573578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"db7c9aa4c73974cdaeb84b5b5fcac89aa7648817a3e52d6f5a49d9c9de9495ed\"" May 8 00:04:34.328072 containerd[1505]: time="2025-05-08T00:04:34.328028011Z" level=info msg="CreateContainer within sandbox \"db7c9aa4c73974cdaeb84b5b5fcac89aa7648817a3e52d6f5a49d9c9de9495ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:04:35.137159 containerd[1505]: time="2025-05-08T00:04:35.137076692Z" level=info msg="CreateContainer within sandbox \"24328b23016f090461f0896bd78bbcf0891aa40fb68bd4424840855f03503b36\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"367e6194dcf29fb47bcc54cfbd2747aac07c9a9dc4be468aa26a6e1c893d71f6\"" May 8 00:04:35.138019 containerd[1505]: time="2025-05-08T00:04:35.137970797Z" level=info msg="StartContainer for \"367e6194dcf29fb47bcc54cfbd2747aac07c9a9dc4be468aa26a6e1c893d71f6\"" May 8 00:04:35.144333 containerd[1505]: time="2025-05-08T00:04:35.144252060Z" level=info msg="CreateContainer within sandbox \"4023147f23ed0e04677cdb63b2b7fd58fedf2cf10f7b7a7424e39b741d4c95bb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ee8b4b98f822c2df215036616711e9dd7c0a1b9ded6fa99b7ca384956a33dca0\"" May 8 00:04:35.144974 containerd[1505]: time="2025-05-08T00:04:35.144943797Z" level=info msg="StartContainer for \"ee8b4b98f822c2df215036616711e9dd7c0a1b9ded6fa99b7ca384956a33dca0\"" May 8 00:04:35.146504 containerd[1505]: time="2025-05-08T00:04:35.146470120Z" level=info msg="CreateContainer within sandbox \"db7c9aa4c73974cdaeb84b5b5fcac89aa7648817a3e52d6f5a49d9c9de9495ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3e6c75f9be3e6b353427d4217c3108562387b9d5bf44089673885c9ac16f0993\"" May 8 00:04:35.148026 containerd[1505]: time="2025-05-08T00:04:35.147982798Z" level=info msg="StartContainer for \"3e6c75f9be3e6b353427d4217c3108562387b9d5bf44089673885c9ac16f0993\"" May 8 00:04:35.171835 systemd[1]: Started cri-containerd-367e6194dcf29fb47bcc54cfbd2747aac07c9a9dc4be468aa26a6e1c893d71f6.scope - libcontainer container 
367e6194dcf29fb47bcc54cfbd2747aac07c9a9dc4be468aa26a6e1c893d71f6. May 8 00:04:35.189021 systemd[1]: Started cri-containerd-3e6c75f9be3e6b353427d4217c3108562387b9d5bf44089673885c9ac16f0993.scope - libcontainer container 3e6c75f9be3e6b353427d4217c3108562387b9d5bf44089673885c9ac16f0993. May 8 00:04:35.190865 systemd[1]: Started cri-containerd-ee8b4b98f822c2df215036616711e9dd7c0a1b9ded6fa99b7ca384956a33dca0.scope - libcontainer container ee8b4b98f822c2df215036616711e9dd7c0a1b9ded6fa99b7ca384956a33dca0. May 8 00:04:35.486720 containerd[1505]: time="2025-05-08T00:04:35.486171200Z" level=info msg="StartContainer for \"367e6194dcf29fb47bcc54cfbd2747aac07c9a9dc4be468aa26a6e1c893d71f6\" returns successfully" May 8 00:04:35.488198 containerd[1505]: time="2025-05-08T00:04:35.487089906Z" level=info msg="StartContainer for \"3e6c75f9be3e6b353427d4217c3108562387b9d5bf44089673885c9ac16f0993\" returns successfully" May 8 00:04:35.488198 containerd[1505]: time="2025-05-08T00:04:35.487094037Z" level=info msg="StartContainer for \"ee8b4b98f822c2df215036616711e9dd7c0a1b9ded6fa99b7ca384956a33dca0\" returns successfully" May 8 00:04:36.626991 kubelet[2242]: E0508 00:04:36.626940 2242 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:04:36.726094 kubelet[2242]: E0508 00:04:36.726042 2242 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 8 00:04:37.082207 kubelet[2242]: E0508 00:04:37.082059 2242 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 8 00:04:37.103332 kubelet[2242]: I0508 00:04:37.103284 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:04:37.148213 kubelet[2242]: I0508 00:04:37.148155 2242 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:04:37.148213 kubelet[2242]: E0508 00:04:37.148203 2242 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 00:04:37.157392 kubelet[2242]: E0508 00:04:37.157358 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:37.257817 kubelet[2242]: E0508 00:04:37.257748 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:37.358588 kubelet[2242]: E0508 00:04:37.358554 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:37.459761 kubelet[2242]: E0508 00:04:37.459703 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:37.559873 kubelet[2242]: E0508 00:04:37.559817 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:37.660569 kubelet[2242]: E0508 00:04:37.660413 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:37.761078 kubelet[2242]: E0508 00:04:37.761010 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:37.862152 kubelet[2242]: E0508 00:04:37.862090 2242 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:37.962784 kubelet[2242]: E0508 00:04:37.962643 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:38.063252 kubelet[2242]: E0508 00:04:38.063195 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:38.163878 kubelet[2242]: E0508 00:04:38.163814 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:38.264535 kubelet[2242]: E0508 00:04:38.264399 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:38.364789 kubelet[2242]: E0508 00:04:38.364739 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:38.464821 kubelet[2242]: E0508 00:04:38.464785 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:38.565485 kubelet[2242]: E0508 00:04:38.565341 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:38.665876 kubelet[2242]: E0508 00:04:38.665809 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:38.766445 kubelet[2242]: E0508 00:04:38.766384 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:38.866584 kubelet[2242]: E0508 00:04:38.866527 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:38.967134 kubelet[2242]: E0508 00:04:38.967072 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:39.067699 kubelet[2242]: E0508 00:04:39.067635 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:39.168325 kubelet[2242]: E0508 00:04:39.168173 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:39.268891 kubelet[2242]: E0508 00:04:39.268837 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:39.369127 kubelet[2242]: E0508 00:04:39.369071 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:39.469915 kubelet[2242]: E0508 00:04:39.469778 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:39.570394 kubelet[2242]: E0508 00:04:39.570355 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:39.670880 kubelet[2242]: E0508 00:04:39.670823 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:39.771564 kubelet[2242]: E0508 00:04:39.771420 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:39.872200 kubelet[2242]: E0508 00:04:39.872143 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:39.972813 kubelet[2242]: 
E0508 00:04:39.972756 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:40.073284 kubelet[2242]: E0508 00:04:40.073141 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:40.173638 kubelet[2242]: E0508 00:04:40.173588 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:40.274189 kubelet[2242]: E0508 00:04:40.274131 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:04:40.363098 kubelet[2242]: I0508 00:04:40.363059 2242 apiserver.go:52] "Watching apiserver" May 8 00:04:40.370829 kubelet[2242]: I0508 00:04:40.370783 2242 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:04:41.205021 systemd[1]: Reload requested from client PID 2523 ('systemctl') (unit session-7.scope)... May 8 00:04:41.205045 systemd[1]: Reloading... May 8 00:04:41.301732 zram_generator::config[2566]: No configuration found. May 8 00:04:41.433110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:04:41.555810 systemd[1]: Reloading finished in 350 ms. May 8 00:04:41.585693 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:04:41.612245 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:04:41.612597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:04:41.612673 systemd[1]: kubelet.service: Consumed 1.187s CPU time, 121.3M memory peak. May 8 00:04:41.623033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:04:41.805275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:04:41.809870 (kubelet)[2612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:04:41.847916 kubelet[2612]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:04:41.847916 kubelet[2612]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:04:41.847916 kubelet[2612]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:04:41.848356 kubelet[2612]: I0508 00:04:41.847981 2612 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:04:41.856018 kubelet[2612]: I0508 00:04:41.855972 2612 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:04:41.856018 kubelet[2612]: I0508 00:04:41.856010 2612 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:04:41.856305 kubelet[2612]: I0508 00:04:41.856282 2612 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:04:41.857623 kubelet[2612]: I0508 00:04:41.857598 2612 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:04:41.860113 kubelet[2612]: I0508 00:04:41.860081 2612 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:04:41.863238 kubelet[2612]: E0508 00:04:41.863191 2612 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:04:41.863238 kubelet[2612]: I0508 00:04:41.863221 2612 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:04:41.868591 kubelet[2612]: I0508 00:04:41.868554 2612 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:04:41.868799 kubelet[2612]: I0508 00:04:41.868734 2612 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:04:41.868927 kubelet[2612]: I0508 00:04:41.868867 2612 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:04:41.869187 kubelet[2612]: I0508 00:04:41.868915 2612 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:04:41.869187 kubelet[2612]: I0508 00:04:41.869184 2612 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:04:41.869293 kubelet[2612]: I0508 00:04:41.869194 2612 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:04:41.869293 kubelet[2612]: I0508 00:04:41.869231 2612 state_mem.go:36] "Initialized new in-memory state store" May 8 00:04:41.869411 kubelet[2612]: I0508 00:04:41.869384 2612 kubelet.go:408] "Attempting to sync node with API server" May 8 00:04:41.869411 kubelet[2612]: I0508 00:04:41.869403 2612 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:04:41.869467 kubelet[2612]: I0508 00:04:41.869449 2612 kubelet.go:314] "Adding apiserver pod source" May 8 00:04:41.869467 kubelet[2612]: I0508 00:04:41.869467 2612 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:04:41.876342 kubelet[2612]: I0508 00:04:41.876321 2612 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:04:41.877185 kubelet[2612]: I0508 00:04:41.876916 2612 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:04:41.878267 kubelet[2612]: I0508 00:04:41.878237 2612 server.go:1269] "Started kubelet" May 8 00:04:41.879534 kubelet[2612]: I0508 00:04:41.878489 2612 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:04:41.879534 kubelet[2612]: I0508 00:04:41.878533 2612 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:04:41.879534 kubelet[2612]: I0508 00:04:41.878884 2612 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:04:41.879912 kubelet[2612]: I0508 00:04:41.879883 2612 server.go:460] "Adding debug handlers to kubelet server" May 8 00:04:41.880044 kubelet[2612]: I0508 00:04:41.880016 2612 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" May 8 00:04:41.881587 kubelet[2612]: I0508 00:04:41.881552 2612 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:04:41.884202 kubelet[2612]: I0508 00:04:41.883754 2612 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:04:41.884202 kubelet[2612]: I0508 00:04:41.883862 2612 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:04:41.884202 kubelet[2612]: I0508 00:04:41.884006 2612 reconciler.go:26] "Reconciler: start to sync state" May 8 00:04:41.886949 kubelet[2612]: E0508 00:04:41.885838 2612 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:04:41.886949 kubelet[2612]: I0508 00:04:41.886910 2612 factory.go:221] Registration of the systemd container factory successfully May 8 00:04:41.887046 kubelet[2612]: I0508 00:04:41.887016 2612 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:04:41.889057 kubelet[2612]: I0508 00:04:41.888900 2612 factory.go:221] Registration of the containerd container factory successfully May 8 00:04:41.896317 kubelet[2612]: I0508 00:04:41.896273 2612 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:04:41.897569 kubelet[2612]: I0508 00:04:41.897535 2612 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:04:41.897611 kubelet[2612]: I0508 00:04:41.897583 2612 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:04:41.897611 kubelet[2612]: I0508 00:04:41.897603 2612 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:04:41.897717 kubelet[2612]: E0508 00:04:41.897660 2612 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:04:41.941325 kubelet[2612]: I0508 00:04:41.941285 2612 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:04:41.941325 kubelet[2612]: I0508 00:04:41.941310 2612 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:04:41.941325 kubelet[2612]: I0508 00:04:41.941331 2612 state_mem.go:36] "Initialized new in-memory state store" May 8 00:04:41.941526 kubelet[2612]: I0508 00:04:41.941507 2612 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:04:41.941549 kubelet[2612]: I0508 00:04:41.941523 2612 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:04:41.941549 kubelet[2612]: I0508 00:04:41.941542 2612 policy_none.go:49] "None policy: Start" May 8 00:04:41.942141 kubelet[2612]: I0508 00:04:41.942114 2612 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:04:41.942141 kubelet[2612]: I0508 00:04:41.942137 2612 state_mem.go:35] "Initializing new in-memory state store" May 8 00:04:41.942309 kubelet[2612]: I0508 00:04:41.942288 2612 state_mem.go:75] "Updated machine memory state" May 8 00:04:41.947132 kubelet[2612]: I0508 00:04:41.947091 2612 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:04:41.947376 kubelet[2612]: I0508 00:04:41.947290 2612 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:04:41.947376 kubelet[2612]: I0508 
00:04:41.947307 2612 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:04:41.947554 kubelet[2612]: I0508 00:04:41.947526 2612 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:04:42.051183 kubelet[2612]: I0508 00:04:42.051102 2612 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:04:42.059548 kubelet[2612]: I0508 00:04:42.059340 2612 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 8 00:04:42.059548 kubelet[2612]: I0508 00:04:42.059454 2612 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:04:42.085308 kubelet[2612]: I0508 00:04:42.084606 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e110e6cd779cb0bea97f3f5a72b9687-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e110e6cd779cb0bea97f3f5a72b9687\") " pod="kube-system/kube-apiserver-localhost" May 8 00:04:42.085308 kubelet[2612]: I0508 00:04:42.084644 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:04:42.085308 kubelet[2612]: I0508 00:04:42.084665 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:04:42.085308 kubelet[2612]: I0508 00:04:42.084716 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:04:42.085308 kubelet[2612]: I0508 00:04:42.084738 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e110e6cd779cb0bea97f3f5a72b9687-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e110e6cd779cb0bea97f3f5a72b9687\") " pod="kube-system/kube-apiserver-localhost" May 8 00:04:42.085634 kubelet[2612]: I0508 00:04:42.084758 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e110e6cd779cb0bea97f3f5a72b9687-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4e110e6cd779cb0bea97f3f5a72b9687\") " pod="kube-system/kube-apiserver-localhost" May 8 00:04:42.085634 kubelet[2612]: I0508 00:04:42.084778 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:04:42.085634 kubelet[2612]: I0508 00:04:42.084797 2612 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:04:42.085634 kubelet[2612]: I0508 00:04:42.084832 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:04:42.151303 sudo[2643]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:04:42.151767 sudo[2643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 00:04:42.772848 sudo[2643]: pam_unix(sudo:session): session closed for user root May 8 00:04:42.871280 kubelet[2612]: I0508 00:04:42.871220 2612 apiserver.go:52] "Watching apiserver" May 8 00:04:42.884393 kubelet[2612]: I0508 00:04:42.884343 2612 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:04:43.101100 kubelet[2612]: E0508 00:04:43.100372 2612 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:04:43.101100 kubelet[2612]: E0508 00:04:43.100424 2612 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:04:43.101100 kubelet[2612]: E0508 00:04:43.100701 2612 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:04:43.160731 kubelet[2612]: I0508 00:04:43.160475 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.160459663 podStartE2EDuration="1.160459663s" podCreationTimestamp="2025-05-08 00:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:04:43.089584381 +0000 UTC m=+1.275809566" watchObservedRunningTime="2025-05-08 00:04:43.160459663 +0000 UTC m=+1.346684848" May 8 00:04:43.169701 kubelet[2612]: I0508 00:04:43.169646 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.169633346 podStartE2EDuration="1.169633346s" podCreationTimestamp="2025-05-08 00:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:04:43.160721273 +0000 UTC m=+1.346946468" watchObservedRunningTime="2025-05-08 00:04:43.169633346 +0000 UTC m=+1.355858531" May 8 00:04:43.177849 kubelet[2612]: I0508 00:04:43.177794 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.177782202 podStartE2EDuration="1.177782202s" podCreationTimestamp="2025-05-08 00:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:04:43.169764036 +0000 UTC 
m=+1.355989221" watchObservedRunningTime="2025-05-08 00:04:43.177782202 +0000 UTC m=+1.364007387" May 8 00:04:43.742807 update_engine[1500]: I20250508 00:04:43.742726 1500 update_attempter.cc:509] Updating boot flags... May 8 00:04:44.042745 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2672) May 8 00:04:44.099700 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2671) May 8 00:04:44.683302 sudo[1690]: pam_unix(sudo:session): session closed for user root May 8 00:04:44.684930 sshd[1689]: Connection closed by 10.0.0.1 port 53140 May 8 00:04:44.685362 sshd-session[1686]: pam_unix(sshd:session): session closed for user core May 8 00:04:44.689559 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:53140.service: Deactivated successfully. May 8 00:04:44.692130 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:04:44.692407 systemd[1]: session-7.scope: Consumed 4.841s CPU time, 257.9M memory peak. May 8 00:04:44.693735 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. May 8 00:04:44.694644 systemd-logind[1497]: Removed session 7. May 8 00:04:45.655191 kubelet[2612]: I0508 00:04:45.655157 2612 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:04:45.655669 containerd[1505]: time="2025-05-08T00:04:45.655505909Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:04:45.656032 kubelet[2612]: I0508 00:04:45.655691 2612 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:04:47.310098 kubelet[2612]: W0508 00:04:47.310050 2612 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 8 00:04:47.310098 kubelet[2612]: E0508 00:04:47.310106 2612 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 8 00:04:47.310707 kubelet[2612]: W0508 00:04:47.310291 2612 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 8 00:04:47.310707 kubelet[2612]: E0508 00:04:47.310311 2612 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 8 00:04:47.317190 kubelet[2612]: I0508 00:04:47.317142 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2ef569c3-1896-45a6-815f-e7fa624dd004-lib-modules\") pod \"kube-proxy-7c665\" (UID: \"2ef569c3-1896-45a6-815f-e7fa624dd004\") " pod="kube-system/kube-proxy-7c665" May 8 00:04:47.317591 kubelet[2612]: I0508 00:04:47.317405 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-run\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.317591 kubelet[2612]: I0508 00:04:47.317460 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-config-path\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.317591 kubelet[2612]: I0508 00:04:47.317481 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-host-proc-sys-kernel\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.317591 kubelet[2612]: I0508 00:04:47.317524 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-hubble-tls\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.317591 kubelet[2612]: I0508 00:04:47.317548 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-etc-cni-netd\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.318286 kubelet[2612]: I0508 00:04:47.317571 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sggtj\" (UniqueName: \"kubernetes.io/projected/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-kube-api-access-sggtj\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.318286 kubelet[2612]: I0508 00:04:47.317871 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfc75\" (UniqueName: \"kubernetes.io/projected/2ef569c3-1896-45a6-815f-e7fa624dd004-kube-api-access-jfc75\") pod \"kube-proxy-7c665\" (UID: \"2ef569c3-1896-45a6-815f-e7fa624dd004\") " pod="kube-system/kube-proxy-7c665" May 8 00:04:47.318286 kubelet[2612]: I0508 00:04:47.317891 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-xtables-lock\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.318286 kubelet[2612]: I0508 00:04:47.318027 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ef569c3-1896-45a6-815f-e7fa624dd004-xtables-lock\") pod \"kube-proxy-7c665\" (UID: \"2ef569c3-1896-45a6-815f-e7fa624dd004\") " 
pod="kube-system/kube-proxy-7c665" May 8 00:04:47.318286 kubelet[2612]: I0508 00:04:47.318045 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-hostproc\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.318476 kubelet[2612]: I0508 00:04:47.318076 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-clustermesh-secrets\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.318476 kubelet[2612]: I0508 00:04:47.318108 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ef569c3-1896-45a6-815f-e7fa624dd004-kube-proxy\") pod \"kube-proxy-7c665\" (UID: \"2ef569c3-1896-45a6-815f-e7fa624dd004\") " pod="kube-system/kube-proxy-7c665" May 8 00:04:47.318476 kubelet[2612]: I0508 00:04:47.318136 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-bpf-maps\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.318476 kubelet[2612]: I0508 00:04:47.318162 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-cgroup\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.318476 kubelet[2612]: I0508 00:04:47.318192 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cni-path\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.318476 kubelet[2612]: I0508 00:04:47.318213 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-lib-modules\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.318689 kubelet[2612]: I0508 00:04:47.318233 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-host-proc-sys-net\") pod \"cilium-hz9qv\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " pod="kube-system/cilium-hz9qv" May 8 00:04:47.323036 systemd[1]: Created slice kubepods-besteffort-pod2ef569c3_1896_45a6_815f_e7fa624dd004.slice - libcontainer container kubepods-besteffort-pod2ef569c3_1896_45a6_815f_e7fa624dd004.slice. May 8 00:04:47.336640 systemd[1]: Created slice kubepods-burstable-podda0c1ec8_5fcd_4576_9f6c_7096cf27fea1.slice - libcontainer container kubepods-burstable-podda0c1ec8_5fcd_4576_9f6c_7096cf27fea1.slice. 
May 8 00:04:47.661864 systemd[1]: Created slice kubepods-besteffort-pod67b31f7a_3aa4_4c11_9f6a_4d38877cc5e8.slice - libcontainer container kubepods-besteffort-pod67b31f7a_3aa4_4c11_9f6a_4d38877cc5e8.slice. May 8 00:04:47.720721 kubelet[2612]: I0508 00:04:47.720644 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2zj\" (UniqueName: \"kubernetes.io/projected/67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8-kube-api-access-bw2zj\") pod \"cilium-operator-5d85765b45-kc2z9\" (UID: \"67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8\") " pod="kube-system/cilium-operator-5d85765b45-kc2z9" May 8 00:04:47.720912 kubelet[2612]: I0508 00:04:47.720757 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8-cilium-config-path\") pod \"cilium-operator-5d85765b45-kc2z9\" (UID: \"67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8\") " pod="kube-system/cilium-operator-5d85765b45-kc2z9" May 8 00:04:48.240884 containerd[1505]: time="2025-05-08T00:04:48.240840053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hz9qv,Uid:da0c1ec8-5fcd-4576-9f6c-7096cf27fea1,Namespace:kube-system,Attempt:0,}" May 8 00:04:48.265272 containerd[1505]: time="2025-05-08T00:04:48.265225563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kc2z9,Uid:67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8,Namespace:kube-system,Attempt:0,}" May 8 00:04:48.534333 containerd[1505]: time="2025-05-08T00:04:48.534198503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c665,Uid:2ef569c3-1896-45a6-815f-e7fa624dd004,Namespace:kube-system,Attempt:0,}" May 8 00:04:48.673711 containerd[1505]: time="2025-05-08T00:04:48.672312130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:04:48.673711 containerd[1505]: time="2025-05-08T00:04:48.672400233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:04:48.673711 containerd[1505]: time="2025-05-08T00:04:48.672418153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:48.673711 containerd[1505]: time="2025-05-08T00:04:48.673385311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:48.677499 containerd[1505]: time="2025-05-08T00:04:48.677058725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:04:48.677597 containerd[1505]: time="2025-05-08T00:04:48.677572605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:04:48.677718 containerd[1505]: time="2025-05-08T00:04:48.677656991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:48.677868 containerd[1505]: time="2025-05-08T00:04:48.677842459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:48.680360 containerd[1505]: time="2025-05-08T00:04:48.680115050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:04:48.680360 containerd[1505]: time="2025-05-08T00:04:48.680174281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:04:48.680360 containerd[1505]: time="2025-05-08T00:04:48.680208856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:48.680360 containerd[1505]: time="2025-05-08T00:04:48.680277718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:04:48.698848 systemd[1]: Started cri-containerd-0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27.scope - libcontainer container 0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27. May 8 00:04:48.704145 systemd[1]: Started cri-containerd-c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75.scope - libcontainer container c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75. May 8 00:04:48.706794 systemd[1]: Started cri-containerd-d333f52ba84cb4de8a7d430932bf8fb1b5a7664377cb7f2b22c8d3dfb727c243.scope - libcontainer container d333f52ba84cb4de8a7d430932bf8fb1b5a7664377cb7f2b22c8d3dfb727c243. May 8 00:04:48.737618 containerd[1505]: time="2025-05-08T00:04:48.737565459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c665,Uid:2ef569c3-1896-45a6-815f-e7fa624dd004,Namespace:kube-system,Attempt:0,} returns sandbox id \"d333f52ba84cb4de8a7d430932bf8fb1b5a7664377cb7f2b22c8d3dfb727c243\"" May 8 00:04:48.740687 containerd[1505]: time="2025-05-08T00:04:48.740522836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hz9qv,Uid:da0c1ec8-5fcd-4576-9f6c-7096cf27fea1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\"" May 8 00:04:48.744748 containerd[1505]: time="2025-05-08T00:04:48.743924922Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:04:48.745201 containerd[1505]: time="2025-05-08T00:04:48.745168458Z" level=info msg="CreateContainer within sandbox \"d333f52ba84cb4de8a7d430932bf8fb1b5a7664377cb7f2b22c8d3dfb727c243\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:04:48.751133 containerd[1505]: time="2025-05-08T00:04:48.751093436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kc2z9,Uid:67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27\"" May 8 00:04:48.765851 containerd[1505]: time="2025-05-08T00:04:48.765810180Z" level=info msg="CreateContainer within sandbox \"d333f52ba84cb4de8a7d430932bf8fb1b5a7664377cb7f2b22c8d3dfb727c243\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7bfceee6e4885b703004f1ebddc422d6f120131dbb2a58448d94ad0eb876edd\"" May 8 00:04:48.766522 containerd[1505]: time="2025-05-08T00:04:48.766450189Z" level=info msg="StartContainer for \"e7bfceee6e4885b703004f1ebddc422d6f120131dbb2a58448d94ad0eb876edd\"" May 8 00:04:48.802870 systemd[1]: Started 
cri-containerd-e7bfceee6e4885b703004f1ebddc422d6f120131dbb2a58448d94ad0eb876edd.scope - libcontainer container e7bfceee6e4885b703004f1ebddc422d6f120131dbb2a58448d94ad0eb876edd. May 8 00:04:48.932117 containerd[1505]: time="2025-05-08T00:04:48.932065443Z" level=info msg="StartContainer for \"e7bfceee6e4885b703004f1ebddc422d6f120131dbb2a58448d94ad0eb876edd\" returns successfully" May 8 00:04:50.226612 kubelet[2612]: I0508 00:04:50.226531 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7c665" podStartSLOduration=4.226511772 podStartE2EDuration="4.226511772s" podCreationTimestamp="2025-05-08 00:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:04:49.949127101 +0000 UTC m=+8.135352286" watchObservedRunningTime="2025-05-08 00:04:50.226511772 +0000 UTC m=+8.412736967" May 8 00:04:52.892631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1670189964.mount: Deactivated successfully. May 8 00:04:58.198718 containerd[1505]: time="2025-05-08T00:04:58.198609522Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:58.199843 containerd[1505]: time="2025-05-08T00:04:58.199802915Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 8 00:04:58.201568 containerd[1505]: time="2025-05-08T00:04:58.201454213Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:04:58.202933 containerd[1505]: time="2025-05-08T00:04:58.202885089Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.458925834s" May 8 00:04:58.202933 containerd[1505]: time="2025-05-08T00:04:58.202932650Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:04:58.204480 containerd[1505]: time="2025-05-08T00:04:58.204440087Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:04:58.209509 containerd[1505]: time="2025-05-08T00:04:58.209336871Z" level=info msg="CreateContainer within sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:04:58.231759 containerd[1505]: time="2025-05-08T00:04:58.231707471Z" level=info msg="CreateContainer within sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\"" May 8 00:04:58.233554 containerd[1505]: time="2025-05-08T00:04:58.232412705Z" level=info msg="StartContainer for 
\"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\"" May 8 00:04:58.271918 systemd[1]: Started cri-containerd-02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6.scope - libcontainer container 02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6. May 8 00:04:58.958755 systemd[1]: cri-containerd-02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6.scope: Deactivated successfully. May 8 00:04:59.435246 containerd[1505]: time="2025-05-08T00:04:59.435190607Z" level=info msg="StartContainer for \"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\" returns successfully" May 8 00:04:59.455437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6-rootfs.mount: Deactivated successfully. May 8 00:04:59.906823 containerd[1505]: time="2025-05-08T00:04:59.906733441Z" level=info msg="shim disconnected" id=02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6 namespace=k8s.io May 8 00:04:59.906823 containerd[1505]: time="2025-05-08T00:04:59.906797906Z" level=warning msg="cleaning up after shim disconnected" id=02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6 namespace=k8s.io May 8 00:04:59.906823 containerd[1505]: time="2025-05-08T00:04:59.906806873Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:05:00.674573 containerd[1505]: time="2025-05-08T00:05:00.674519958Z" level=info msg="CreateContainer within sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:05:00.712487 containerd[1505]: time="2025-05-08T00:05:00.712423381Z" level=info msg="CreateContainer within sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\"" May 8 00:05:00.715923 containerd[1505]: time="2025-05-08T00:05:00.715884617Z" level=info msg="StartContainer for \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\"" May 8 00:05:00.747839 systemd[1]: Started cri-containerd-3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238.scope - libcontainer container 3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238. May 8 00:05:00.775533 containerd[1505]: time="2025-05-08T00:05:00.775479948Z" level=info msg="StartContainer for \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\" returns successfully" May 8 00:05:00.789555 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:05:00.789815 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:05:00.790011 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 00:05:00.796994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:05:00.798849 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:05:00.799287 systemd[1]: cri-containerd-3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238.scope: Deactivated successfully. May 8 00:05:00.817048 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 8 00:05:00.820031 containerd[1505]: time="2025-05-08T00:05:00.819974675Z" level=info msg="shim disconnected" id=3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238 namespace=k8s.io May 8 00:05:00.820031 containerd[1505]: time="2025-05-08T00:05:00.820030240Z" level=warning msg="cleaning up after shim disconnected" id=3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238 namespace=k8s.io May 8 00:05:00.820156 containerd[1505]: time="2025-05-08T00:05:00.820039700Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:05:01.676001 containerd[1505]: time="2025-05-08T00:05:01.675958895Z" level=info msg="CreateContainer within sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:05:01.696776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238-rootfs.mount: Deactivated successfully. May 8 00:05:01.702273 containerd[1505]: time="2025-05-08T00:05:01.702231169Z" level=info msg="CreateContainer within sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\"" May 8 00:05:01.703748 containerd[1505]: time="2025-05-08T00:05:01.702825096Z" level=info msg="StartContainer for \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\"" May 8 00:05:01.731852 systemd[1]: Started cri-containerd-2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639.scope - libcontainer container 2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639. May 8 00:05:01.767853 containerd[1505]: time="2025-05-08T00:05:01.767797831Z" level=info msg="StartContainer for \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\" returns successfully" May 8 00:05:01.769293 systemd[1]: cri-containerd-2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639.scope: Deactivated successfully. May 8 00:05:01.793795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639-rootfs.mount: Deactivated successfully. 
May 8 00:05:01.841457 containerd[1505]: time="2025-05-08T00:05:01.841383974Z" level=info msg="shim disconnected" id=2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639 namespace=k8s.io May 8 00:05:01.841457 containerd[1505]: time="2025-05-08T00:05:01.841440640Z" level=warning msg="cleaning up after shim disconnected" id=2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639 namespace=k8s.io May 8 00:05:01.841457 containerd[1505]: time="2025-05-08T00:05:01.841449429Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:05:01.859639 containerd[1505]: time="2025-05-08T00:05:01.858057886Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:05:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:05:01.996709 containerd[1505]: time="2025-05-08T00:05:01.996546007Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:05:01.997788 containerd[1505]: time="2025-05-08T00:05:01.997738400Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 8 00:05:01.999056 containerd[1505]: time="2025-05-08T00:05:01.999036472Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:05:02.000666 containerd[1505]: time="2025-05-08T00:05:02.000634765Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.796158803s" May 8 00:05:02.000666 containerd[1505]: time="2025-05-08T00:05:02.000660979Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 8 00:05:02.002298 containerd[1505]: time="2025-05-08T00:05:02.002257394Z" level=info msg="CreateContainer within sandbox \"0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:05:02.016258 containerd[1505]: time="2025-05-08T00:05:02.016219913Z" level=info msg="CreateContainer within sandbox \"0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\"" May 8 00:05:02.016882 containerd[1505]: time="2025-05-08T00:05:02.016839350Z" level=info msg="StartContainer for \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\"" May 8 00:05:02.046851 systemd[1]: Started cri-containerd-98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f.scope - libcontainer container 98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f. 
May 8 00:05:02.072172 containerd[1505]: time="2025-05-08T00:05:02.072109234Z" level=info msg="StartContainer for \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\" returns successfully" May 8 00:05:02.680773 containerd[1505]: time="2025-05-08T00:05:02.680729516Z" level=info msg="CreateContainer within sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:05:02.836871 kubelet[2612]: I0508 00:05:02.836792 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-kc2z9" podStartSLOduration=2.58835406 podStartE2EDuration="15.836768584s" podCreationTimestamp="2025-05-08 00:04:47 +0000 UTC" firstStartedPulling="2025-05-08 00:04:48.752854371 +0000 UTC m=+6.939079556" lastFinishedPulling="2025-05-08 00:05:02.001268895 +0000 UTC m=+20.187494080" observedRunningTime="2025-05-08 00:05:02.836462925 +0000 UTC m=+21.022688110" watchObservedRunningTime="2025-05-08 00:05:02.836768584 +0000 UTC m=+21.022993769" May 8 00:05:02.866137 containerd[1505]: time="2025-05-08T00:05:02.866091042Z" level=info msg="CreateContainer within sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\"" May 8 00:05:02.867594 containerd[1505]: time="2025-05-08T00:05:02.866722903Z" level=info msg="StartContainer for \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\"" May 8 00:05:02.911812 systemd[1]: Started cri-containerd-aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4.scope - libcontainer container aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4. May 8 00:05:02.945946 systemd[1]: cri-containerd-aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4.scope: Deactivated successfully. May 8 00:05:02.948397 containerd[1505]: time="2025-05-08T00:05:02.948245002Z" level=info msg="StartContainer for \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\" returns successfully" May 8 00:05:03.148456 containerd[1505]: time="2025-05-08T00:05:03.148368568Z" level=info msg="shim disconnected" id=aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4 namespace=k8s.io May 8 00:05:03.148456 containerd[1505]: time="2025-05-08T00:05:03.148434973Z" level=warning msg="cleaning up after shim disconnected" id=aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4 namespace=k8s.io May 8 00:05:03.148456 containerd[1505]: time="2025-05-08T00:05:03.148446527Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:05:03.686850 containerd[1505]: time="2025-05-08T00:05:03.686795851Z" level=info msg="CreateContainer within sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:05:03.696133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4-rootfs.mount: Deactivated successfully. 
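[annotation] The pod_startup_latency_tracker entry above for cilium-operator-5d85765b45-kc2z9 reports both an end-to-end duration and an SLO duration; the difference is exactly the image-pull window it also logs (SLO duration excludes pull time). A quick cross-check of that relationship from the logged timestamps, truncated to microseconds since Python's %f stops there:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
first_pull = datetime.strptime("2025-05-08 00:04:48.752854", fmt)  # firstStartedPulling
last_pull  = datetime.strptime("2025-05-08 00:05:02.001268", fmt)  # lastFinishedPulling
pull_window = (last_pull - first_pull).total_seconds()             # ~13.248414 s

e2e = 15.836768584            # podStartE2EDuration from the tracker line
print(e2e - pull_window)      # ~2.5883546 — matches podStartSLOduration
                              # (2.58835406) up to the truncated nanoseconds
```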
May 8 00:05:03.872928 containerd[1505]: time="2025-05-08T00:05:03.872866344Z" level=info msg="CreateContainer within sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\"" May 8 00:05:03.873351 containerd[1505]: time="2025-05-08T00:05:03.873295254Z" level=info msg="StartContainer for \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\"" May 8 00:05:03.904818 systemd[1]: Started cri-containerd-ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e.scope - libcontainer container ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e. May 8 00:05:03.955833 containerd[1505]: time="2025-05-08T00:05:03.955687151Z" level=info msg="StartContainer for \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\" returns successfully" May 8 00:05:04.101637 kubelet[2612]: I0508 00:05:04.101578 2612 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 00:05:04.198298 systemd[1]: Created slice kubepods-burstable-poddc38f068_e13a_4b85_aa6b_3571967ff53d.slice - libcontainer container kubepods-burstable-poddc38f068_e13a_4b85_aa6b_3571967ff53d.slice. May 8 00:05:04.204022 systemd[1]: Created slice kubepods-burstable-pod19935a3b_cd66_4f9e_94e3_87c5c7626c83.slice - libcontainer container kubepods-burstable-pod19935a3b_cd66_4f9e_94e3_87c5c7626c83.slice. May 8 00:05:04.329853 kubelet[2612]: I0508 00:05:04.329715 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc38f068-e13a-4b85-aa6b-3571967ff53d-config-volume\") pod \"coredns-6f6b679f8f-9f4hf\" (UID: \"dc38f068-e13a-4b85-aa6b-3571967ff53d\") " pod="kube-system/coredns-6f6b679f8f-9f4hf" May 8 00:05:04.329853 kubelet[2612]: I0508 00:05:04.329756 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqjzv\" (UniqueName: \"kubernetes.io/projected/19935a3b-cd66-4f9e-94e3-87c5c7626c83-kube-api-access-jqjzv\") pod \"coredns-6f6b679f8f-sx8m5\" (UID: \"19935a3b-cd66-4f9e-94e3-87c5c7626c83\") " pod="kube-system/coredns-6f6b679f8f-sx8m5" May 8 00:05:04.329853 kubelet[2612]: I0508 00:05:04.329779 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19935a3b-cd66-4f9e-94e3-87c5c7626c83-config-volume\") pod \"coredns-6f6b679f8f-sx8m5\" (UID: \"19935a3b-cd66-4f9e-94e3-87c5c7626c83\") " pod="kube-system/coredns-6f6b679f8f-sx8m5" May 8 00:05:04.329853 kubelet[2612]: I0508 00:05:04.329796 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcs69\" (UniqueName: \"kubernetes.io/projected/dc38f068-e13a-4b85-aa6b-3571967ff53d-kube-api-access-pcs69\") pod \"coredns-6f6b679f8f-9f4hf\" (UID: \"dc38f068-e13a-4b85-aa6b-3571967ff53d\") " pod="kube-system/coredns-6f6b679f8f-9f4hf" May 8 00:05:04.507195 containerd[1505]: time="2025-05-08T00:05:04.507145644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sx8m5,Uid:19935a3b-cd66-4f9e-94e3-87c5c7626c83,Namespace:kube-system,Attempt:0,}" May 8 00:05:04.511986 containerd[1505]: time="2025-05-08T00:05:04.511929021Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-9f4hf,Uid:dc38f068-e13a-4b85-aa6b-3571967ff53d,Namespace:kube-system,Attempt:0,}" May 8 00:05:04.728264 kubelet[2612]: I0508 00:05:04.727815 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hz9qv" podStartSLOduration=9.265059126 podStartE2EDuration="18.727795106s" podCreationTimestamp="2025-05-08 00:04:46 +0000 UTC" firstStartedPulling="2025-05-08 00:04:48.741541247 +0000 UTC m=+6.927766432" lastFinishedPulling="2025-05-08 00:04:58.204277177 +0000 UTC m=+16.390502412" observedRunningTime="2025-05-08 00:05:04.726246194 +0000 UTC m=+22.912471379" watchObservedRunningTime="2025-05-08 00:05:04.727795106 +0000 UTC m=+22.914020291" May 8 00:05:06.213449 systemd-networkd[1438]: cilium_host: Link UP May 8 00:05:06.213831 systemd-networkd[1438]: cilium_net: Link UP May 8 00:05:06.214149 systemd-networkd[1438]: cilium_net: Gained carrier May 8 00:05:06.214355 systemd-networkd[1438]: cilium_host: Gained carrier May 8 00:05:06.254834 systemd-networkd[1438]: cilium_host: Gained IPv6LL May 8 00:05:06.331566 systemd-networkd[1438]: cilium_vxlan: Link UP May 8 00:05:06.331581 systemd-networkd[1438]: cilium_vxlan: Gained carrier May 8 00:05:06.541706 kernel: NET: Registered PF_ALG protocol family May 8 00:05:06.904901 systemd-networkd[1438]: cilium_net: Gained IPv6LL May 8 00:05:07.244573 systemd-networkd[1438]: lxc_health: Link UP May 8 00:05:07.265181 systemd-networkd[1438]: lxc_health: Gained carrier May 8 00:05:07.721327 systemd-networkd[1438]: lxcade446b164c8: Link UP May 8 00:05:07.732722 kernel: eth0: renamed from tmp48eea May 8 00:05:07.744708 kernel: eth0: renamed from tmp907f3 May 8 00:05:07.752528 systemd-networkd[1438]: lxc471a99307154: Link UP May 8 00:05:07.753323 systemd-networkd[1438]: lxc471a99307154: Gained carrier May 8 00:05:07.756748 systemd-networkd[1438]: lxcade446b164c8: Gained carrier May 8 00:05:08.254729 systemd-networkd[1438]: cilium_vxlan: Gained IPv6LL May 8 00:05:08.568940 systemd-networkd[1438]: lxc_health: Gained IPv6LL May 8 00:05:08.778827 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:60340.service - OpenSSH per-connection server daemon (10.0.0.1:60340). May 8 00:05:08.822450 sshd[3823]: Accepted publickey for core from 10.0.0.1 port 60340 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:08.823726 sshd-session[3823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:08.827951 systemd-logind[1497]: New session 8 of user core. May 8 00:05:08.837889 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:05:08.969080 sshd[3825]: Connection closed by 10.0.0.1 port 60340 May 8 00:05:08.970775 sshd-session[3823]: pam_unix(sshd:session): session closed for user core May 8 00:05:08.975070 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:60340.service: Deactivated successfully. May 8 00:05:08.977422 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:05:08.978210 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. May 8 00:05:08.979359 systemd-logind[1497]: Removed session 8. May 8 00:05:09.144839 systemd-networkd[1438]: lxcade446b164c8: Gained IPv6LL May 8 00:05:09.208880 systemd-networkd[1438]: lxc471a99307154: Gained IPv6LL May 8 00:05:11.250347 containerd[1505]: time="2025-05-08T00:05:11.250220120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:05:11.250347 containerd[1505]: time="2025-05-08T00:05:11.250286585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:05:11.250347 containerd[1505]: time="2025-05-08T00:05:11.250297987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:05:11.250875 containerd[1505]: time="2025-05-08T00:05:11.250384532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:05:11.271838 systemd[1]: Started cri-containerd-48eeac704bf62d8bfa7b14e39fb5e823a89525b4d73237846a1403475acb734a.scope - libcontainer container 48eeac704bf62d8bfa7b14e39fb5e823a89525b4d73237846a1403475acb734a. May 8 00:05:11.282003 containerd[1505]: time="2025-05-08T00:05:11.281722308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:05:11.282003 containerd[1505]: time="2025-05-08T00:05:11.281801207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:05:11.282003 containerd[1505]: time="2025-05-08T00:05:11.281814935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:05:11.282003 containerd[1505]: time="2025-05-08T00:05:11.281913213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:05:11.288427 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:05:11.316846 systemd[1]: Started cri-containerd-907f38a0f942401ed9b3d60f47ec6092f129fcf74a8204bdd6e1c2347c44f698.scope - libcontainer container 907f38a0f942401ed9b3d60f47ec6092f129fcf74a8204bdd6e1c2347c44f698. 
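[annotation] The repeated "loading plugin" blocks here are not duplicated output: each RunPodSandbox launches its own containerd-shim-runc-v2, and every shim logs the same four plugin lines (event.v1.publisher, internal.v1.shutdown, ttrpc.v1.task, ttrpc.v1.pause) on startup — three quadruples for the three sandboxes earlier, two here for the coredns pair. Counting one marker plugin per quadruple makes that visible; a sketch against this capture's literal escaping, nothing more:

```python
def count_shim_starts(journal_lines):
    # io.containerd.ttrpc.v1.pause is the last of the four plugins each shim
    # loads, so in this capture one hit == one shim (sandbox) launch.
    marker = 'loading plugin \\"io.containerd.ttrpc.v1.pause\\"'
    return sum(marker in line for line in journal_lines)
```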
May 8 00:05:11.328052 containerd[1505]: time="2025-05-08T00:05:11.328012482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sx8m5,Uid:19935a3b-cd66-4f9e-94e3-87c5c7626c83,Namespace:kube-system,Attempt:0,} returns sandbox id \"48eeac704bf62d8bfa7b14e39fb5e823a89525b4d73237846a1403475acb734a\"" May 8 00:05:11.329871 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:05:11.336877 containerd[1505]: time="2025-05-08T00:05:11.336848158Z" level=info msg="CreateContainer within sandbox \"48eeac704bf62d8bfa7b14e39fb5e823a89525b4d73237846a1403475acb734a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:05:11.359340 containerd[1505]: time="2025-05-08T00:05:11.359297301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9f4hf,Uid:dc38f068-e13a-4b85-aa6b-3571967ff53d,Namespace:kube-system,Attempt:0,} returns sandbox id \"907f38a0f942401ed9b3d60f47ec6092f129fcf74a8204bdd6e1c2347c44f698\"" May 8 00:05:11.361203 containerd[1505]: time="2025-05-08T00:05:11.361175981Z" level=info msg="CreateContainer within sandbox \"907f38a0f942401ed9b3d60f47ec6092f129fcf74a8204bdd6e1c2347c44f698\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:05:12.230650 kubelet[2612]: I0508 00:05:12.230595 2612 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:05:12.298739 containerd[1505]: time="2025-05-08T00:05:12.298629252Z" level=info msg="CreateContainer within sandbox \"907f38a0f942401ed9b3d60f47ec6092f129fcf74a8204bdd6e1c2347c44f698\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99bd5ff2415267d80380ff10cdc94d923864b3e58cfad31c4a37d14a60132b9f\"" May 8 00:05:12.299173 containerd[1505]: time="2025-05-08T00:05:12.299127595Z" level=info msg="StartContainer for \"99bd5ff2415267d80380ff10cdc94d923864b3e58cfad31c4a37d14a60132b9f\"" May 8 00:05:12.323524 systemd[1]: run-containerd-runc-k8s.io-99bd5ff2415267d80380ff10cdc94d923864b3e58cfad31c4a37d14a60132b9f-runc.66YuxE.mount: Deactivated successfully. May 8 00:05:12.325505 containerd[1505]: time="2025-05-08T00:05:12.325449954Z" level=info msg="CreateContainer within sandbox \"48eeac704bf62d8bfa7b14e39fb5e823a89525b4d73237846a1403475acb734a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c88c9b1e74e6191de206845d5315652fff83358bc674941f83dd585899d615e\"" May 8 00:05:12.326704 containerd[1505]: time="2025-05-08T00:05:12.326029091Z" level=info msg="StartContainer for \"6c88c9b1e74e6191de206845d5315652fff83358bc674941f83dd585899d615e\"" May 8 00:05:12.334932 systemd[1]: Started cri-containerd-99bd5ff2415267d80380ff10cdc94d923864b3e58cfad31c4a37d14a60132b9f.scope - libcontainer container 99bd5ff2415267d80380ff10cdc94d923864b3e58cfad31c4a37d14a60132b9f. May 8 00:05:12.365865 systemd[1]: Started cri-containerd-6c88c9b1e74e6191de206845d5315652fff83358bc674941f83dd585899d615e.scope - libcontainer container 6c88c9b1e74e6191de206845d5315652fff83358bc674941f83dd585899d615e. 
May 8 00:05:12.610950 containerd[1505]: time="2025-05-08T00:05:12.610729731Z" level=info msg="StartContainer for \"99bd5ff2415267d80380ff10cdc94d923864b3e58cfad31c4a37d14a60132b9f\" returns successfully" May 8 00:05:12.610950 containerd[1505]: time="2025-05-08T00:05:12.610838581Z" level=info msg="StartContainer for \"6c88c9b1e74e6191de206845d5315652fff83358bc674941f83dd585899d615e\" returns successfully" May 8 00:05:12.773364 kubelet[2612]: I0508 00:05:12.772843 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9f4hf" podStartSLOduration=25.772823794 podStartE2EDuration="25.772823794s" podCreationTimestamp="2025-05-08 00:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:05:12.743437374 +0000 UTC m=+30.929662559" watchObservedRunningTime="2025-05-08 00:05:12.772823794 +0000 UTC m=+30.959048989" May 8 00:05:13.739143 kubelet[2612]: I0508 00:05:13.739069 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sx8m5" podStartSLOduration=26.739051299 podStartE2EDuration="26.739051299s" podCreationTimestamp="2025-05-08 00:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:05:12.831296531 +0000 UTC m=+31.017521716" watchObservedRunningTime="2025-05-08 00:05:13.739051299 +0000 UTC m=+31.925276494" May 8 00:05:13.985742 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:60346.service - OpenSSH per-connection server daemon (10.0.0.1:60346). May 8 00:05:14.029434 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 60346 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:14.031097 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:14.035714 systemd-logind[1497]: New session 9 of user core. May 8 00:05:14.052813 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:05:14.169454 sshd[4026]: Connection closed by 10.0.0.1 port 60346 May 8 00:05:14.169916 sshd-session[4024]: pam_unix(sshd:session): session closed for user core May 8 00:05:14.173997 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:60346.service: Deactivated successfully. May 8 00:05:14.176462 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:05:14.177379 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. May 8 00:05:14.178326 systemd-logind[1497]: Removed session 9. May 8 00:05:19.188131 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:34440.service - OpenSSH per-connection server daemon (10.0.0.1:34440). May 8 00:05:19.230836 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 34440 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:19.232456 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:19.236661 systemd-logind[1497]: New session 10 of user core. May 8 00:05:19.246973 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:05:19.365368 sshd[4045]: Connection closed by 10.0.0.1 port 34440 May 8 00:05:19.365781 sshd-session[4043]: pam_unix(sshd:session): session closed for user core May 8 00:05:19.370457 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:34440.service: Deactivated successfully. May 8 00:05:19.372863 systemd[1]: session-10.scope: Deactivated successfully. 
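[annotation] For the coredns pods above, firstStartedPulling/lastFinishedPulling are the zero time (the image needed no pull), so the SLO duration collapses to observation time minus pod creation time. That is verifiable directly from the coredns-6f6b679f8f-9f4hf entry (microsecond truncation again):

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
created  = datetime.strptime("2025-05-08 00:04:47.000000", fmt)  # podCreationTimestamp
observed = datetime.strptime("2025-05-08 00:05:12.772823", fmt)  # watchObservedRunningTime
print((observed - created).total_seconds())  # 25.772823 ~= podStartSLOduration
                                             # (25.772823794)
```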
May 8 00:05:19.373637 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. May 8 00:05:19.374575 systemd-logind[1497]: Removed session 10. May 8 00:05:24.384314 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:34454.service - OpenSSH per-connection server daemon (10.0.0.1:34454). May 8 00:05:24.425071 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 34454 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:24.426698 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:24.431254 systemd-logind[1497]: New session 11 of user core. May 8 00:05:24.442807 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:05:24.551278 sshd[4061]: Connection closed by 10.0.0.1 port 34454 May 8 00:05:24.551809 sshd-session[4059]: pam_unix(sshd:session): session closed for user core May 8 00:05:24.556829 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:34454.service: Deactivated successfully. May 8 00:05:24.559883 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:05:24.560758 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. May 8 00:05:24.561754 systemd-logind[1497]: Removed session 11. May 8 00:05:29.568208 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:34660.service - OpenSSH per-connection server daemon (10.0.0.1:34660). May 8 00:05:29.609957 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 34660 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:29.611815 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:29.616743 systemd-logind[1497]: New session 12 of user core. May 8 00:05:29.631127 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:05:29.741557 sshd[4077]: Connection closed by 10.0.0.1 port 34660 May 8 00:05:29.742026 sshd-session[4075]: pam_unix(sshd:session): session closed for user core May 8 00:05:29.750954 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:34660.service: Deactivated successfully. May 8 00:05:29.753399 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:05:29.755253 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. May 8 00:05:29.762012 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:34662.service - OpenSSH per-connection server daemon (10.0.0.1:34662). May 8 00:05:29.763262 systemd-logind[1497]: Removed session 12. May 8 00:05:29.799591 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 34662 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:29.801255 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:29.806402 systemd-logind[1497]: New session 13 of user core. May 8 00:05:29.815818 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:05:29.963531 sshd[4094]: Connection closed by 10.0.0.1 port 34662 May 8 00:05:29.964110 sshd-session[4091]: pam_unix(sshd:session): session closed for user core May 8 00:05:29.976000 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:34662.service: Deactivated successfully. May 8 00:05:29.979632 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:05:29.983663 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. May 8 00:05:29.992465 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:34678.service - OpenSSH per-connection server daemon (10.0.0.1:34678). 
May 8 00:05:29.994555 systemd-logind[1497]: Removed session 13. May 8 00:05:30.031107 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 34678 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:30.032876 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:30.038252 systemd-logind[1497]: New session 14 of user core. May 8 00:05:30.051851 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:05:30.166366 sshd[4107]: Connection closed by 10.0.0.1 port 34678 May 8 00:05:30.166837 sshd-session[4104]: pam_unix(sshd:session): session closed for user core May 8 00:05:30.171188 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:34678.service: Deactivated successfully. May 8 00:05:30.173868 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:05:30.174873 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. May 8 00:05:30.175941 systemd-logind[1497]: Removed session 14. May 8 00:05:35.179482 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:34686.service - OpenSSH per-connection server daemon (10.0.0.1:34686). May 8 00:05:35.218534 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 34686 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:35.220093 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:35.224222 systemd-logind[1497]: New session 15 of user core. May 8 00:05:35.233817 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:05:35.356067 sshd[4124]: Connection closed by 10.0.0.1 port 34686 May 8 00:05:35.356414 sshd-session[4122]: pam_unix(sshd:session): session closed for user core May 8 00:05:35.360121 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:34686.service: Deactivated successfully. May 8 00:05:35.362344 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:05:35.363053 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. May 8 00:05:35.363844 systemd-logind[1497]: Removed session 15. May 8 00:05:40.368803 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:60434.service - OpenSSH per-connection server daemon (10.0.0.1:60434). May 8 00:05:40.407593 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 60434 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:40.409139 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:40.413156 systemd-logind[1497]: New session 16 of user core. May 8 00:05:40.426814 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:05:40.542179 sshd[4139]: Connection closed by 10.0.0.1 port 60434 May 8 00:05:40.542648 sshd-session[4137]: pam_unix(sshd:session): session closed for user core May 8 00:05:40.547358 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:60434.service: Deactivated successfully. May 8 00:05:40.549639 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:05:40.550370 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. May 8 00:05:40.551313 systemd-logind[1497]: Removed session 16. May 8 00:05:45.556312 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:60436.service - OpenSSH per-connection server daemon (10.0.0.1:60436). 
May 8 00:05:45.661109 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 60436 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:45.662819 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:45.667636 systemd-logind[1497]: New session 17 of user core. May 8 00:05:45.674810 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:05:45.805171 sshd[4157]: Connection closed by 10.0.0.1 port 60436 May 8 00:05:45.805775 sshd-session[4155]: pam_unix(sshd:session): session closed for user core May 8 00:05:45.821256 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:60436.service: Deactivated successfully. May 8 00:05:45.824625 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:05:45.829109 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. May 8 00:05:45.842121 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:60452.service - OpenSSH per-connection server daemon (10.0.0.1:60452). May 8 00:05:45.843626 systemd-logind[1497]: Removed session 17. May 8 00:05:45.880415 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 60452 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:45.882210 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:45.888388 systemd-logind[1497]: New session 18 of user core. May 8 00:05:45.898944 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:05:46.172926 sshd[4172]: Connection closed by 10.0.0.1 port 60452 May 8 00:05:46.173309 sshd-session[4169]: pam_unix(sshd:session): session closed for user core May 8 00:05:46.190436 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:60452.service: Deactivated successfully. May 8 00:05:46.193542 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:05:46.195428 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. May 8 00:05:46.208131 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:60468.service - OpenSSH per-connection server daemon (10.0.0.1:60468). May 8 00:05:46.209485 systemd-logind[1497]: Removed session 18. May 8 00:05:46.249659 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 60468 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:46.251924 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:46.257520 systemd-logind[1497]: New session 19 of user core. May 8 00:05:46.266826 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:05:48.197842 sshd[4185]: Connection closed by 10.0.0.1 port 60468 May 8 00:05:48.198415 sshd-session[4182]: pam_unix(sshd:session): session closed for user core May 8 00:05:48.214326 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:60468.service: Deactivated successfully. May 8 00:05:48.217340 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:05:48.219733 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. May 8 00:05:48.229078 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:45270.service - OpenSSH per-connection server daemon (10.0.0.1:45270). May 8 00:05:48.230214 systemd-logind[1497]: Removed session 19. 
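The chained entries above repeat one lifecycle: sshd accepts the publickey, PAM opens a session, systemd-logind allocates session N and a session-N.scope unit, and the teardown runs in reverse on disconnect. Session durations can be recovered by pairing the logind lines; a minimal sketch, assuming journal lines in exactly the textual shape shown here (the regexes and time layout are assumptions for this format, not a systemd API):

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
    "time"
)

// Matches e.g. "May 8 00:05:45.667636 systemd-logind[1497]: New session 17 of user core."
var (
    newSess = regexp.MustCompile(`^(\w+ +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: New session (\d+) `)
    delSess = regexp.MustCompile(`^(\w+ +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

func main() {
    opened := map[string]time.Time{} // session id -> open time
    sc := bufio.NewScanner(os.Stdin)
    for sc.Scan() {
        line := sc.Text()
        if m := newSess.FindStringSubmatch(line); m != nil {
            t, _ := time.Parse("Jan 2 15:04:05.000000", m[1])
            opened[m[2]] = t
        } else if m := delSess.FindStringSubmatch(line); m != nil {
            if t0, ok := opened[m[2]]; ok {
                t1, _ := time.Parse("Jan 2 15:04:05.000000", m[1])
                fmt.Printf("session %s lasted %s\n", m[2], t1.Sub(t0))
            }
        }
    }
}

Fed the lines above, this would report, for example, that session 18 (opened 00:05:45.888, removed 00:05:46.195) lasted well under a second, while session 19 spans the long-running 00:05:46 to 00:05:48 window.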
May 8 00:05:48.266364 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 45270 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:48.268382 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:48.273312 systemd-logind[1497]: New session 20 of user core. May 8 00:05:48.284824 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:05:48.969107 sshd[4206]: Connection closed by 10.0.0.1 port 45270 May 8 00:05:48.969569 sshd-session[4203]: pam_unix(sshd:session): session closed for user core May 8 00:05:48.983470 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:45270.service: Deactivated successfully. May 8 00:05:48.985598 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:05:48.987822 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. May 8 00:05:48.992919 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:45284.service - OpenSSH per-connection server daemon (10.0.0.1:45284). May 8 00:05:48.994074 systemd-logind[1497]: Removed session 20. May 8 00:05:49.027942 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 45284 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:49.029767 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:49.034312 systemd-logind[1497]: New session 21 of user core. May 8 00:05:49.048944 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:05:49.209915 sshd[4221]: Connection closed by 10.0.0.1 port 45284 May 8 00:05:49.210308 sshd-session[4216]: pam_unix(sshd:session): session closed for user core May 8 00:05:49.214849 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:45284.service: Deactivated successfully. May 8 00:05:49.217210 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:05:49.217925 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. May 8 00:05:49.218770 systemd-logind[1497]: Removed session 21. May 8 00:05:54.222760 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:45290.service - OpenSSH per-connection server daemon (10.0.0.1:45290). May 8 00:05:54.294081 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 45290 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:54.296398 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:54.304101 systemd-logind[1497]: New session 22 of user core. May 8 00:05:54.314122 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:05:54.491819 sshd[4237]: Connection closed by 10.0.0.1 port 45290 May 8 00:05:54.492268 sshd-session[4235]: pam_unix(sshd:session): session closed for user core May 8 00:05:54.499747 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:45290.service: Deactivated successfully. May 8 00:05:54.504341 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:05:54.507718 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. May 8 00:05:54.509278 systemd-logind[1497]: Removed session 22. May 8 00:05:59.519004 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:43948.service - OpenSSH per-connection server daemon (10.0.0.1:43948). 
May 8 00:05:59.553909 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 43948 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:05:59.555621 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:05:59.559977 systemd-logind[1497]: New session 23 of user core. May 8 00:05:59.574898 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:05:59.679181 sshd[4253]: Connection closed by 10.0.0.1 port 43948 May 8 00:05:59.679574 sshd-session[4251]: pam_unix(sshd:session): session closed for user core May 8 00:05:59.683560 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:43948.service: Deactivated successfully. May 8 00:05:59.685973 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:05:59.686736 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. May 8 00:05:59.687721 systemd-logind[1497]: Removed session 23. May 8 00:06:04.691611 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:43958.service - OpenSSH per-connection server daemon (10.0.0.1:43958). May 8 00:06:04.730770 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 43958 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:06:04.732326 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:04.736437 systemd-logind[1497]: New session 24 of user core. May 8 00:06:04.742855 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:06:04.900742 sshd[4271]: Connection closed by 10.0.0.1 port 43958 May 8 00:06:04.901085 sshd-session[4269]: pam_unix(sshd:session): session closed for user core May 8 00:06:04.904772 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:43958.service: Deactivated successfully. May 8 00:06:04.906796 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:06:04.908069 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit. May 8 00:06:04.909135 systemd-logind[1497]: Removed session 24. May 8 00:06:09.913081 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:49130.service - OpenSSH per-connection server daemon (10.0.0.1:49130). May 8 00:06:09.952776 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 49130 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:06:09.954310 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:09.958325 systemd-logind[1497]: New session 25 of user core. May 8 00:06:09.972910 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 00:06:10.080303 sshd[4286]: Connection closed by 10.0.0.1 port 49130 May 8 00:06:10.080758 sshd-session[4284]: pam_unix(sshd:session): session closed for user core May 8 00:06:10.085261 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:49130.service: Deactivated successfully. May 8 00:06:10.087707 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:06:10.088588 systemd-logind[1497]: Session 25 logged out. Waiting for processes to exit. May 8 00:06:10.089532 systemd-logind[1497]: Removed session 25. May 8 00:06:15.098232 systemd[1]: Started sshd@25-10.0.0.54:22-10.0.0.1:49136.service - OpenSSH per-connection server daemon (10.0.0.1:49136). 
May 8 00:06:15.140753 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 49136 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:06:15.142357 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:15.146758 systemd-logind[1497]: New session 26 of user core. May 8 00:06:15.157832 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:06:15.316309 sshd[4301]: Connection closed by 10.0.0.1 port 49136 May 8 00:06:15.316700 sshd-session[4299]: pam_unix(sshd:session): session closed for user core May 8 00:06:15.326528 systemd[1]: sshd@25-10.0.0.54:22-10.0.0.1:49136.service: Deactivated successfully. May 8 00:06:15.328635 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:06:15.330370 systemd-logind[1497]: Session 26 logged out. Waiting for processes to exit. May 8 00:06:15.340064 systemd[1]: Started sshd@26-10.0.0.54:22-10.0.0.1:49148.service - OpenSSH per-connection server daemon (10.0.0.1:49148). May 8 00:06:15.341067 systemd-logind[1497]: Removed session 26. May 8 00:06:15.375071 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 49148 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:06:15.376602 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:15.381168 systemd-logind[1497]: New session 27 of user core. May 8 00:06:15.388810 systemd[1]: Started session-27.scope - Session 27 of User core. May 8 00:06:16.816456 containerd[1505]: time="2025-05-08T00:06:16.816378609Z" level=info msg="StopContainer for \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\" with timeout 30 (s)" May 8 00:06:16.834713 containerd[1505]: time="2025-05-08T00:06:16.832767030Z" level=info msg="Stop container \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\" with signal terminated" May 8 00:06:16.845287 systemd[1]: cri-containerd-98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f.scope: Deactivated successfully. May 8 00:06:16.846719 containerd[1505]: time="2025-05-08T00:06:16.846647454Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:06:16.856517 containerd[1505]: time="2025-05-08T00:06:16.856474779Z" level=info msg="StopContainer for \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\" with timeout 2 (s)" May 8 00:06:16.856753 containerd[1505]: time="2025-05-08T00:06:16.856735637Z" level=info msg="Stop container \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\" with signal terminated" May 8 00:06:16.863996 systemd-networkd[1438]: lxc_health: Link DOWN May 8 00:06:16.864009 systemd-networkd[1438]: lxc_health: Lost carrier May 8 00:06:16.872718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f-rootfs.mount: Deactivated successfully. 
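"Stop container ... with signal terminated" followed by the scope deactivation above is the usual graceful-stop sequence: send SIGTERM, wait up to the logged timeout, and escalate to SIGKILL only if the task has not exited. A minimal sketch of that pattern against the containerd Go client (socket path, namespace, container ID, and the 30s timeout are copied from the entries above; this is an illustration of the pattern, not kubelet's actual CRI code path):

package main

import (
    "context"
    "log"
    "syscall"
    "time"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // The entries above all run in the CRI namespace.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    container, err := client.LoadContainer(ctx, "98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f")
    if err != nil {
        log.Fatal(err)
    }
    task, err := container.Task(ctx, nil)
    if err != nil {
        log.Fatal(err)
    }

    // Register for the exit event before signalling, then terminate gracefully.
    exitCh, err := task.Wait(ctx)
    if err != nil {
        log.Fatal(err)
    }
    if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
        log.Fatal(err)
    }

    select {
    case status := <-exitCh:
        log.Printf("exited with status %d", status.ExitCode())
    case <-time.After(30 * time.Second): // "with timeout 30 (s)" in the log
        _ = task.Kill(ctx, syscall.SIGKILL) // escalate after the grace period
    }
}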
May 8 00:06:16.881815 containerd[1505]: time="2025-05-08T00:06:16.881757449Z" level=info msg="shim disconnected" id=98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f namespace=k8s.io May 8 00:06:16.881815 containerd[1505]: time="2025-05-08T00:06:16.881808547Z" level=warning msg="cleaning up after shim disconnected" id=98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f namespace=k8s.io May 8 00:06:16.881815 containerd[1505]: time="2025-05-08T00:06:16.881816391Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:06:16.889533 systemd[1]: cri-containerd-ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e.scope: Deactivated successfully. May 8 00:06:16.889961 systemd[1]: cri-containerd-ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e.scope: Consumed 7.066s CPU time, 124.6M memory peak, 544K read from disk, 13.3M written to disk. May 8 00:06:16.905621 containerd[1505]: time="2025-05-08T00:06:16.905571451Z" level=info msg="StopContainer for \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\" returns successfully" May 8 00:06:16.909343 containerd[1505]: time="2025-05-08T00:06:16.909268962Z" level=info msg="StopPodSandbox for \"0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27\"" May 8 00:06:16.913218 containerd[1505]: time="2025-05-08T00:06:16.909513930Z" level=info msg="Container to stop \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:06:16.915431 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27-shm.mount: Deactivated successfully. May 8 00:06:16.919482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e-rootfs.mount: Deactivated successfully. May 8 00:06:16.920316 systemd[1]: cri-containerd-0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27.scope: Deactivated successfully. May 8 00:06:16.926587 containerd[1505]: time="2025-05-08T00:06:16.926511817Z" level=info msg="shim disconnected" id=ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e namespace=k8s.io May 8 00:06:16.926587 containerd[1505]: time="2025-05-08T00:06:16.926574477Z" level=warning msg="cleaning up after shim disconnected" id=ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e namespace=k8s.io May 8 00:06:16.926587 containerd[1505]: time="2025-05-08T00:06:16.926583044Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:06:16.943547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27-rootfs.mount: Deactivated successfully. 
May 8 00:06:16.947947 containerd[1505]: time="2025-05-08T00:06:16.947875754Z" level=info msg="shim disconnected" id=0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27 namespace=k8s.io May 8 00:06:16.947947 containerd[1505]: time="2025-05-08T00:06:16.947938714Z" level=warning msg="cleaning up after shim disconnected" id=0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27 namespace=k8s.io May 8 00:06:16.947947 containerd[1505]: time="2025-05-08T00:06:16.947948323Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:06:16.951583 containerd[1505]: time="2025-05-08T00:06:16.951551603Z" level=info msg="StopContainer for \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\" returns successfully" May 8 00:06:16.953097 containerd[1505]: time="2025-05-08T00:06:16.953067152Z" level=info msg="StopPodSandbox for \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\"" May 8 00:06:16.953196 containerd[1505]: time="2025-05-08T00:06:16.953105224Z" level=info msg="Container to stop \"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:06:16.953196 containerd[1505]: time="2025-05-08T00:06:16.953133639Z" level=info msg="Container to stop \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:06:16.953196 containerd[1505]: time="2025-05-08T00:06:16.953142696Z" level=info msg="Container to stop \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:06:16.953196 containerd[1505]: time="2025-05-08T00:06:16.953151814Z" level=info msg="Container to stop \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:06:16.953196 containerd[1505]: time="2025-05-08T00:06:16.953160650Z" level=info msg="Container to stop \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:06:16.961103 systemd[1]: cri-containerd-c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75.scope: Deactivated successfully. 
May 8 00:06:16.964321 containerd[1505]: time="2025-05-08T00:06:16.964267100Z" level=info msg="TearDown network for sandbox \"0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27\" successfully" May 8 00:06:16.964321 containerd[1505]: time="2025-05-08T00:06:16.964293000Z" level=info msg="StopPodSandbox for \"0c7d5ecadba6c5107b23ae62d5dad9e6b39b8cfbb074c1f606eabdafa1ecea27\" returns successfully" May 8 00:06:16.980839 kubelet[2612]: E0508 00:06:16.980786 2612 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:06:17.006454 containerd[1505]: time="2025-05-08T00:06:17.006210038Z" level=info msg="shim disconnected" id=c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75 namespace=k8s.io May 8 00:06:17.006454 containerd[1505]: time="2025-05-08T00:06:17.006304940Z" level=warning msg="cleaning up after shim disconnected" id=c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75 namespace=k8s.io May 8 00:06:17.006454 containerd[1505]: time="2025-05-08T00:06:17.006323335Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:06:17.034243 containerd[1505]: time="2025-05-08T00:06:17.034176813Z" level=info msg="TearDown network for sandbox \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" successfully" May 8 00:06:17.034243 containerd[1505]: time="2025-05-08T00:06:17.034228933Z" level=info msg="StopPodSandbox for \"c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75\" returns successfully" May 8 00:06:17.090740 kubelet[2612]: I0508 00:06:17.090583 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8-cilium-config-path\") pod \"67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8\" (UID: \"67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8\") " May 8 00:06:17.090740 kubelet[2612]: I0508 00:06:17.090623 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-run\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.090740 kubelet[2612]: I0508 00:06:17.090642 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cni-path\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.090740 kubelet[2612]: I0508 00:06:17.090659 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-host-proc-sys-net\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.090740 kubelet[2612]: I0508 00:06:17.090695 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-lib-modules\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.090740 kubelet[2612]: I0508 00:06:17.090725 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw2zj\" (UniqueName: 
\"kubernetes.io/projected/67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8-kube-api-access-bw2zj\") pod \"67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8\" (UID: \"67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8\") " May 8 00:06:17.091031 kubelet[2612]: I0508 00:06:17.090737 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-etc-cni-netd\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.091031 kubelet[2612]: I0508 00:06:17.090734 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:06:17.091031 kubelet[2612]: I0508 00:06:17.090751 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-xtables-lock\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.091031 kubelet[2612]: I0508 00:06:17.090789 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:06:17.091031 kubelet[2612]: I0508 00:06:17.090811 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-bpf-maps\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.091031 kubelet[2612]: I0508 00:06:17.090835 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-clustermesh-secrets\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.091191 kubelet[2612]: I0508 00:06:17.090857 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-config-path\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.091191 kubelet[2612]: I0508 00:06:17.090875 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sggtj\" (UniqueName: \"kubernetes.io/projected/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-kube-api-access-sggtj\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.091191 kubelet[2612]: I0508 00:06:17.090891 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-host-proc-sys-kernel\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.091191 kubelet[2612]: I0508 
00:06:17.090908 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-hubble-tls\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.091191 kubelet[2612]: I0508 00:06:17.090921 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-hostproc\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.091191 kubelet[2612]: I0508 00:06:17.090934 2612 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-cgroup\") pod \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\" (UID: \"da0c1ec8-5fcd-4576-9f6c-7096cf27fea1\") " May 8 00:06:17.091355 kubelet[2612]: I0508 00:06:17.090978 2612 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.091355 kubelet[2612]: I0508 00:06:17.090987 2612 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.091355 kubelet[2612]: I0508 00:06:17.090813 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cni-path" (OuterVolumeSpecName: "cni-path") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:06:17.091355 kubelet[2612]: I0508 00:06:17.090824 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:06:17.091355 kubelet[2612]: I0508 00:06:17.090834 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:06:17.091355 kubelet[2612]: I0508 00:06:17.091008 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:06:17.091512 kubelet[2612]: I0508 00:06:17.091024 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:06:17.091512 kubelet[2612]: I0508 00:06:17.091414 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:06:17.091512 kubelet[2612]: I0508 00:06:17.091439 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:06:17.095751 kubelet[2612]: I0508 00:06:17.095520 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8-kube-api-access-bw2zj" (OuterVolumeSpecName: "kube-api-access-bw2zj") pod "67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8" (UID: "67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8"). InnerVolumeSpecName "kube-api-access-bw2zj". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:06:17.095751 kubelet[2612]: I0508 00:06:17.095639 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8" (UID: "67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:06:17.095751 kubelet[2612]: I0508 00:06:17.095693 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-hostproc" (OuterVolumeSpecName: "hostproc") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:06:17.096873 kubelet[2612]: I0508 00:06:17.096826 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:06:17.096986 kubelet[2612]: I0508 00:06:17.096903 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:06:17.097180 kubelet[2612]: I0508 00:06:17.097129 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:06:17.098399 kubelet[2612]: I0508 00:06:17.098361 2612 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-kube-api-access-sggtj" (OuterVolumeSpecName: "kube-api-access-sggtj") pod "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" (UID: "da0c1ec8-5fcd-4576-9f6c-7096cf27fea1"). InnerVolumeSpecName "kube-api-access-sggtj". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:06:17.191866 kubelet[2612]: I0508 00:06:17.191820 2612 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.191866 kubelet[2612]: I0508 00:06:17.191858 2612 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.191866 kubelet[2612]: I0508 00:06:17.191868 2612 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.191866 kubelet[2612]: I0508 00:06:17.191879 2612 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.192128 kubelet[2612]: I0508 00:06:17.191887 2612 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.192128 kubelet[2612]: I0508 00:06:17.191897 2612 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.192128 kubelet[2612]: I0508 00:06:17.191905 2612 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.192128 kubelet[2612]: I0508 00:06:17.191914 2612 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.192128 kubelet[2612]: I0508 00:06:17.191922 2612 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bw2zj\" (UniqueName: \"kubernetes.io/projected/67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8-kube-api-access-bw2zj\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.192128 kubelet[2612]: I0508 00:06:17.191929 2612 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.192128 kubelet[2612]: I0508 00:06:17.191937 2612 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.192128 kubelet[2612]: I0508 00:06:17.191945 2612 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.192408 kubelet[2612]: I0508 00:06:17.191952 2612 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sggtj\" (UniqueName: \"kubernetes.io/projected/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-kube-api-access-sggtj\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.192408 kubelet[2612]: I0508 00:06:17.191960 2612 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:06:17.825326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75-rootfs.mount: Deactivated successfully. May 8 00:06:17.825449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6d71e4e5e668bb68e420999204cbc05b5f7b35c771f7670774ee4b3583bcd75-shm.mount: Deactivated successfully. May 8 00:06:17.825530 systemd[1]: var-lib-kubelet-pods-67b31f7a\x2d3aa4\x2d4c11\x2d9f6a\x2d4d38877cc5e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbw2zj.mount: Deactivated successfully. May 8 00:06:17.825620 systemd[1]: var-lib-kubelet-pods-da0c1ec8\x2d5fcd\x2d4576\x2d9f6c\x2d7096cf27fea1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsggtj.mount: Deactivated successfully. May 8 00:06:17.825725 systemd[1]: var-lib-kubelet-pods-da0c1ec8\x2d5fcd\x2d4576\x2d9f6c\x2d7096cf27fea1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:06:17.825808 systemd[1]: var-lib-kubelet-pods-da0c1ec8\x2d5fcd\x2d4576\x2d9f6c\x2d7096cf27fea1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:06:17.854398 kubelet[2612]: I0508 00:06:17.854366 2612 scope.go:117] "RemoveContainer" containerID="98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f" May 8 00:06:17.856353 containerd[1505]: time="2025-05-08T00:06:17.856233462Z" level=info msg="RemoveContainer for \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\"" May 8 00:06:17.860183 systemd[1]: Removed slice kubepods-besteffort-pod67b31f7a_3aa4_4c11_9f6a_4d38877cc5e8.slice - libcontainer container kubepods-besteffort-pod67b31f7a_3aa4_4c11_9f6a_4d38877cc5e8.slice. May 8 00:06:17.864670 systemd[1]: Removed slice kubepods-burstable-podda0c1ec8_5fcd_4576_9f6c_7096cf27fea1.slice - libcontainer container kubepods-burstable-podda0c1ec8_5fcd_4576_9f6c_7096cf27fea1.slice. May 8 00:06:17.864902 systemd[1]: kubepods-burstable-podda0c1ec8_5fcd_4576_9f6c_7096cf27fea1.slice: Consumed 7.175s CPU time, 124.9M memory peak, 644K read from disk, 13.3M written to disk. 
May 8 00:06:17.977755 containerd[1505]: time="2025-05-08T00:06:17.977700310Z" level=info msg="RemoveContainer for \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\" returns successfully" May 8 00:06:17.978027 kubelet[2612]: I0508 00:06:17.977978 2612 scope.go:117] "RemoveContainer" containerID="98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f" May 8 00:06:17.978236 containerd[1505]: time="2025-05-08T00:06:17.978192592Z" level=error msg="ContainerStatus for \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\": not found" May 8 00:06:17.985634 kubelet[2612]: E0508 00:06:17.985587 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\": not found" containerID="98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f" May 8 00:06:17.986039 kubelet[2612]: I0508 00:06:17.985617 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f"} err="failed to get container status \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\": rpc error: code = NotFound desc = an error occurred when try to find container \"98c732ffc23241b6d11e90fc4b1d79c55e357eb5465ee66344bd070e6b00105f\": not found" May 8 00:06:17.986039 kubelet[2612]: I0508 00:06:17.985705 2612 scope.go:117] "RemoveContainer" containerID="ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e" May 8 00:06:17.986701 containerd[1505]: time="2025-05-08T00:06:17.986656297Z" level=info msg="RemoveContainer for \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\"" May 8 00:06:18.041457 containerd[1505]: time="2025-05-08T00:06:18.041395974Z" level=info msg="RemoveContainer for \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\" returns successfully" May 8 00:06:18.041746 kubelet[2612]: I0508 00:06:18.041664 2612 scope.go:117] "RemoveContainer" containerID="aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4" May 8 00:06:18.042771 containerd[1505]: time="2025-05-08T00:06:18.042744907Z" level=info msg="RemoveContainer for \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\"" May 8 00:06:18.057306 containerd[1505]: time="2025-05-08T00:06:18.057275857Z" level=info msg="RemoveContainer for \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\" returns successfully" May 8 00:06:18.057524 kubelet[2612]: I0508 00:06:18.057434 2612 scope.go:117] "RemoveContainer" containerID="2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639" May 8 00:06:18.058523 containerd[1505]: time="2025-05-08T00:06:18.058489921Z" level=info msg="RemoveContainer for \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\"" May 8 00:06:18.062087 containerd[1505]: time="2025-05-08T00:06:18.062054606Z" level=info msg="RemoveContainer for \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\" returns successfully" May 8 00:06:18.062259 kubelet[2612]: I0508 00:06:18.062234 2612 scope.go:117] "RemoveContainer" containerID="3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238" May 8 00:06:18.063256 containerd[1505]: time="2025-05-08T00:06:18.063050142Z" 
level=info msg="RemoveContainer for \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\"" May 8 00:06:18.066082 containerd[1505]: time="2025-05-08T00:06:18.066051348Z" level=info msg="RemoveContainer for \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\" returns successfully" May 8 00:06:18.066201 kubelet[2612]: I0508 00:06:18.066178 2612 scope.go:117] "RemoveContainer" containerID="02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6" May 8 00:06:18.067008 containerd[1505]: time="2025-05-08T00:06:18.066979976Z" level=info msg="RemoveContainer for \"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\"" May 8 00:06:18.070399 containerd[1505]: time="2025-05-08T00:06:18.070367482Z" level=info msg="RemoveContainer for \"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\" returns successfully" May 8 00:06:18.070518 kubelet[2612]: I0508 00:06:18.070500 2612 scope.go:117] "RemoveContainer" containerID="ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e" May 8 00:06:18.070693 containerd[1505]: time="2025-05-08T00:06:18.070648690Z" level=error msg="ContainerStatus for \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\": not found" May 8 00:06:18.070834 kubelet[2612]: E0508 00:06:18.070795 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\": not found" containerID="ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e" May 8 00:06:18.070871 kubelet[2612]: I0508 00:06:18.070839 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e"} err="failed to get container status \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee74b2fad1a6f79989dce0bdb1d4d6c3d22a7c5e3ea926b4a69bfaf82170359e\": not found" May 8 00:06:18.070871 kubelet[2612]: I0508 00:06:18.070869 2612 scope.go:117] "RemoveContainer" containerID="aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4" May 8 00:06:18.071106 containerd[1505]: time="2025-05-08T00:06:18.071061180Z" level=error msg="ContainerStatus for \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\": not found" May 8 00:06:18.071256 kubelet[2612]: E0508 00:06:18.071226 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\": not found" containerID="aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4" May 8 00:06:18.071298 kubelet[2612]: I0508 00:06:18.071260 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4"} err="failed to get container status \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"aefd2060e46c5cb292e78e3ee942881d33d156243a535a0274e14ab85042f1a4\": not found" May 8 00:06:18.071298 kubelet[2612]: I0508 00:06:18.071290 2612 scope.go:117] "RemoveContainer" containerID="2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639" May 8 00:06:18.071468 containerd[1505]: time="2025-05-08T00:06:18.071438302Z" level=error msg="ContainerStatus for \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\": not found" May 8 00:06:18.071571 kubelet[2612]: E0508 00:06:18.071551 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\": not found" containerID="2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639" May 8 00:06:18.071622 kubelet[2612]: I0508 00:06:18.071578 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639"} err="failed to get container status \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ff1e4fb774681d04b5d3774a5ab0e46e236181cce921d0cb9212dce4fca0639\": not found" May 8 00:06:18.071622 kubelet[2612]: I0508 00:06:18.071597 2612 scope.go:117] "RemoveContainer" containerID="3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238" May 8 00:06:18.071775 containerd[1505]: time="2025-05-08T00:06:18.071745019Z" level=error msg="ContainerStatus for \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\": not found" May 8 00:06:18.071896 kubelet[2612]: E0508 00:06:18.071872 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\": not found" containerID="3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238" May 8 00:06:18.071940 kubelet[2612]: I0508 00:06:18.071899 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238"} err="failed to get container status \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ad934a101c149ce91ae055e9a911248627949defd4886669565c61fb6934238\": not found" May 8 00:06:18.071940 kubelet[2612]: I0508 00:06:18.071916 2612 scope.go:117] "RemoveContainer" containerID="02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6" May 8 00:06:18.072122 containerd[1505]: time="2025-05-08T00:06:18.072087715Z" level=error msg="ContainerStatus for \"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\": not found" May 8 00:06:18.072230 kubelet[2612]: E0508 00:06:18.072201 2612 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\": not found" containerID="02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6" May 8 00:06:18.072276 kubelet[2612]: I0508 00:06:18.072229 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6"} err="failed to get container status \"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"02724e46cfc8a59c4bf8112975bc342e55ad728c926cf3c53f59d94f175f49d6\": not found" May 8 00:06:18.778113 sshd[4317]: Connection closed by 10.0.0.1 port 49148 May 8 00:06:18.778744 sshd-session[4314]: pam_unix(sshd:session): session closed for user core May 8 00:06:18.791844 systemd[1]: sshd@26-10.0.0.54:22-10.0.0.1:49148.service: Deactivated successfully. May 8 00:06:18.793987 systemd[1]: session-27.scope: Deactivated successfully. May 8 00:06:18.795844 systemd-logind[1497]: Session 27 logged out. Waiting for processes to exit. May 8 00:06:18.801966 systemd[1]: Started sshd@27-10.0.0.54:22-10.0.0.1:41850.service - OpenSSH per-connection server daemon (10.0.0.1:41850). May 8 00:06:18.803000 systemd-logind[1497]: Removed session 27. May 8 00:06:18.845572 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 41850 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:06:18.847558 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:06:18.853040 systemd-logind[1497]: New session 28 of user core. May 8 00:06:18.860919 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 8 00:06:19.232850 sshd[4482]: Connection closed by 10.0.0.1 port 41850 May 8 00:06:19.234703 sshd-session[4478]: pam_unix(sshd:session): session closed for user core May 8 00:06:19.241454 kubelet[2612]: E0508 00:06:19.241403 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" containerName="clean-cilium-state" May 8 00:06:19.241454 kubelet[2612]: E0508 00:06:19.241436 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" containerName="mount-cgroup" May 8 00:06:19.241454 kubelet[2612]: E0508 00:06:19.241444 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" containerName="apply-sysctl-overwrites" May 8 00:06:19.241454 kubelet[2612]: E0508 00:06:19.241450 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8" containerName="cilium-operator" May 8 00:06:19.241454 kubelet[2612]: E0508 00:06:19.241459 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" containerName="mount-bpf-fs" May 8 00:06:19.241454 kubelet[2612]: E0508 00:06:19.241465 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" containerName="cilium-agent" May 8 00:06:19.242036 kubelet[2612]: I0508 00:06:19.241490 2612 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" containerName="cilium-agent" May 8 00:06:19.242036 kubelet[2612]: I0508 00:06:19.241497 2612 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8" containerName="cilium-operator" May 8 00:06:19.246782 systemd[1]: sshd@27-10.0.0.54:22-10.0.0.1:41850.service: Deactivated successfully. May 8 00:06:19.250774 systemd[1]: session-28.scope: Deactivated successfully. May 8 00:06:19.253243 systemd-logind[1497]: Session 28 logged out. Waiting for processes to exit. May 8 00:06:19.272085 systemd[1]: Started sshd@28-10.0.0.54:22-10.0.0.1:41858.service - OpenSSH per-connection server daemon (10.0.0.1:41858). May 8 00:06:19.274548 systemd-logind[1497]: Removed session 28. May 8 00:06:19.280726 systemd[1]: Created slice kubepods-burstable-pod9f8378d5_cff1_456a_91c1_69eb45f6608c.slice - libcontainer container kubepods-burstable-pod9f8378d5_cff1_456a_91c1_69eb45f6608c.slice. 
May 8 00:06:19.303451 kubelet[2612]: I0508 00:06:19.303412 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f8378d5-cff1-456a-91c1-69eb45f6608c-cni-path\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303451 kubelet[2612]: I0508 00:06:19.303447 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f8378d5-cff1-456a-91c1-69eb45f6608c-cilium-config-path\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303451 kubelet[2612]: I0508 00:06:19.303467 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f8378d5-cff1-456a-91c1-69eb45f6608c-bpf-maps\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303665 kubelet[2612]: I0508 00:06:19.303482 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tbf9\" (UniqueName: \"kubernetes.io/projected/9f8378d5-cff1-456a-91c1-69eb45f6608c-kube-api-access-9tbf9\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303665 kubelet[2612]: I0508 00:06:19.303500 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f8378d5-cff1-456a-91c1-69eb45f6608c-hostproc\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303665 kubelet[2612]: I0508 00:06:19.303515 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f8378d5-cff1-456a-91c1-69eb45f6608c-etc-cni-netd\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303665 kubelet[2612]: I0508 00:06:19.303530 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f8378d5-cff1-456a-91c1-69eb45f6608c-lib-modules\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303665 kubelet[2612]: I0508 00:06:19.303546 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f8378d5-cff1-456a-91c1-69eb45f6608c-cilium-run\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303665 kubelet[2612]: I0508 00:06:19.303615 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9f8378d5-cff1-456a-91c1-69eb45f6608c-cilium-ipsec-secrets\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303824 kubelet[2612]: I0508 00:06:19.303720 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f8378d5-cff1-456a-91c1-69eb45f6608c-host-proc-sys-kernel\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303824 kubelet[2612]: I0508 00:06:19.303776 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f8378d5-cff1-456a-91c1-69eb45f6608c-cilium-cgroup\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303824 kubelet[2612]: I0508 00:06:19.303793 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f8378d5-cff1-456a-91c1-69eb45f6608c-host-proc-sys-net\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303824 kubelet[2612]: I0508 00:06:19.303806 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f8378d5-cff1-456a-91c1-69eb45f6608c-hubble-tls\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303824 kubelet[2612]: I0508 00:06:19.303824 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f8378d5-cff1-456a-91c1-69eb45f6608c-xtables-lock\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.303935 kubelet[2612]: I0508 00:06:19.303841 2612 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f8378d5-cff1-456a-91c1-69eb45f6608c-clustermesh-secrets\") pod \"cilium-52cg8\" (UID: \"9f8378d5-cff1-456a-91c1-69eb45f6608c\") " pod="kube-system/cilium-52cg8"
May 8 00:06:19.311244 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 41858 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:06:19.312995 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:06:19.320372 systemd-logind[1497]: New session 29 of user core.
May 8 00:06:19.327104 systemd[1]: Started session-29.scope - Session 29 of User core.
May 8 00:06:19.379291 sshd[4499]: Connection closed by 10.0.0.1 port 41858
May 8 00:06:19.379764 sshd-session[4496]: pam_unix(sshd:session): session closed for user core
May 8 00:06:19.392983 systemd[1]: sshd@28-10.0.0.54:22-10.0.0.1:41858.service: Deactivated successfully.
May 8 00:06:19.395360 systemd[1]: session-29.scope: Deactivated successfully.
May 8 00:06:19.397277 systemd-logind[1497]: Session 29 logged out. Waiting for processes to exit.
May 8 00:06:19.412236 systemd[1]: Started sshd@29-10.0.0.54:22-10.0.0.1:41866.service - OpenSSH per-connection server daemon (10.0.0.1:41866).
May 8 00:06:19.423194 systemd-logind[1497]: Removed session 29.
May 8 00:06:19.449004 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 41866 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:06:19.450640 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:06:19.455837 systemd-logind[1497]: New session 30 of user core.
May 8 00:06:19.469815 systemd[1]: Started session-30.scope - Session 30 of User core.
May 8 00:06:19.586995 containerd[1505]: time="2025-05-08T00:06:19.585551570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52cg8,Uid:9f8378d5-cff1-456a-91c1-69eb45f6608c,Namespace:kube-system,Attempt:0,}"
May 8 00:06:19.819568 containerd[1505]: time="2025-05-08T00:06:19.819359685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:06:19.820324 containerd[1505]: time="2025-05-08T00:06:19.820242617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:06:19.820324 containerd[1505]: time="2025-05-08T00:06:19.820287433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:06:19.820484 containerd[1505]: time="2025-05-08T00:06:19.820391402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:06:19.846822 systemd[1]: Started cri-containerd-cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c.scope - libcontainer container cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c.
May 8 00:06:19.873057 containerd[1505]: time="2025-05-08T00:06:19.873001613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52cg8,Uid:9f8378d5-cff1-456a-91c1-69eb45f6608c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\""
May 8 00:06:19.875839 containerd[1505]: time="2025-05-08T00:06:19.875791607Z" level=info msg="CreateContainer within sandbox \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:06:19.892120 containerd[1505]: time="2025-05-08T00:06:19.892069379Z" level=info msg="CreateContainer within sandbox \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"685384c70e5c8aaea5132ff42fbad03a5f3d2a7aad917032912b261397842609\""
May 8 00:06:19.892511 containerd[1505]: time="2025-05-08T00:06:19.892472150Z" level=info msg="StartContainer for \"685384c70e5c8aaea5132ff42fbad03a5f3d2a7aad917032912b261397842609\""
May 8 00:06:19.901310 kubelet[2612]: I0508 00:06:19.901242 2612 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8" path="/var/lib/kubelet/pods/67b31f7a-3aa4-4c11-9f6a-4d38877cc5e8/volumes"
May 8 00:06:19.901904 kubelet[2612]: I0508 00:06:19.901878 2612 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da0c1ec8-5fcd-4576-9f6c-7096cf27fea1" path="/var/lib/kubelet/pods/da0c1ec8-5fcd-4576-9f6c-7096cf27fea1/volumes"
May 8 00:06:19.924829 systemd[1]: Started cri-containerd-685384c70e5c8aaea5132ff42fbad03a5f3d2a7aad917032912b261397842609.scope - libcontainer container 685384c70e5c8aaea5132ff42fbad03a5f3d2a7aad917032912b261397842609.
May 8 00:06:19.951278 containerd[1505]: time="2025-05-08T00:06:19.951216932Z" level=info msg="StartContainer for \"685384c70e5c8aaea5132ff42fbad03a5f3d2a7aad917032912b261397842609\" returns successfully"
May 8 00:06:19.963155 systemd[1]: cri-containerd-685384c70e5c8aaea5132ff42fbad03a5f3d2a7aad917032912b261397842609.scope: Deactivated successfully.
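The sandbox and first init container above follow the standard CRI sequence: RunPodSandbox returns sandbox id cc48044e…, CreateContainer registers mount-cgroup inside it, and StartContainer launches it, with systemd tracking each as a cri-containerd-<id>.scope unit. A skeletal sketch of the same three RPCs against containerd's socket, assuming the k8s.io/cri-api client; the image reference is a placeholder and error handling is deliberately elided:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Containerd's CRI endpoint on a typical node.
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-52cg8",
			Namespace: "kube-system",
			Uid:       "9f8378d5-cff1-456a-91c1-69eb45f6608c",
		},
	}
	// 1) RunPodSandbox -> returns the sandbox ID seen in the log.
	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})

	// 2) CreateContainer for the mount-cgroup init container.
	ctr, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium-image"}, // placeholder
		},
		SandboxConfig: sandboxCfg,
	})

	// 3) StartContainer -> "StartContainer ... returns successfully".
	_, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
}
```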
May 8 00:06:19.999207 containerd[1505]: time="2025-05-08T00:06:19.999122771Z" level=info msg="shim disconnected" id=685384c70e5c8aaea5132ff42fbad03a5f3d2a7aad917032912b261397842609 namespace=k8s.io
May 8 00:06:19.999207 containerd[1505]: time="2025-05-08T00:06:19.999176935Z" level=warning msg="cleaning up after shim disconnected" id=685384c70e5c8aaea5132ff42fbad03a5f3d2a7aad917032912b261397842609 namespace=k8s.io
May 8 00:06:19.999207 containerd[1505]: time="2025-05-08T00:06:19.999185752Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:06:20.869370 containerd[1505]: time="2025-05-08T00:06:20.869321986Z" level=info msg="CreateContainer within sandbox \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:06:20.884781 containerd[1505]: time="2025-05-08T00:06:20.884644333Z" level=info msg="CreateContainer within sandbox \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb3d1496ff7f25c5781235facdd2efc55059bce2aede822bb53fe9d6ccc141d9\""
May 8 00:06:20.885270 containerd[1505]: time="2025-05-08T00:06:20.885233382Z" level=info msg="StartContainer for \"bb3d1496ff7f25c5781235facdd2efc55059bce2aede822bb53fe9d6ccc141d9\""
May 8 00:06:20.915846 systemd[1]: Started cri-containerd-bb3d1496ff7f25c5781235facdd2efc55059bce2aede822bb53fe9d6ccc141d9.scope - libcontainer container bb3d1496ff7f25c5781235facdd2efc55059bce2aede822bb53fe9d6ccc141d9.
May 8 00:06:20.942473 containerd[1505]: time="2025-05-08T00:06:20.942420118Z" level=info msg="StartContainer for \"bb3d1496ff7f25c5781235facdd2efc55059bce2aede822bb53fe9d6ccc141d9\" returns successfully"
May 8 00:06:20.950909 systemd[1]: cri-containerd-bb3d1496ff7f25c5781235facdd2efc55059bce2aede822bb53fe9d6ccc141d9.scope: Deactivated successfully.
May 8 00:06:20.975272 containerd[1505]: time="2025-05-08T00:06:20.975185886Z" level=info msg="shim disconnected" id=bb3d1496ff7f25c5781235facdd2efc55059bce2aede822bb53fe9d6ccc141d9 namespace=k8s.io
May 8 00:06:20.975272 containerd[1505]: time="2025-05-08T00:06:20.975241142Z" level=warning msg="cleaning up after shim disconnected" id=bb3d1496ff7f25c5781235facdd2efc55059bce2aede822bb53fe9d6ccc141d9 namespace=k8s.io
May 8 00:06:20.975272 containerd[1505]: time="2025-05-08T00:06:20.975255901Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:06:21.414477 systemd[1]: run-containerd-runc-k8s.io-bb3d1496ff7f25c5781235facdd2efc55059bce2aede822bb53fe9d6ccc141d9-runc.PDRRr4.mount: Deactivated successfully.
May 8 00:06:21.414589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb3d1496ff7f25c5781235facdd2efc55059bce2aede822bb53fe9d6ccc141d9-rootfs.mount: Deactivated successfully.
May 8 00:06:21.873855 containerd[1505]: time="2025-05-08T00:06:21.873794882Z" level=info msg="CreateContainer within sandbox \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:06:21.895097 containerd[1505]: time="2025-05-08T00:06:21.895035333Z" level=info msg="CreateContainer within sandbox \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cbd3618978e2a664b317e7647f06a63ae48e5de85bde8a4cfc11e9c6f3fa05a7\""
May 8 00:06:21.896591 containerd[1505]: time="2025-05-08T00:06:21.896152555Z" level=info msg="StartContainer for \"cbd3618978e2a664b317e7647f06a63ae48e5de85bde8a4cfc11e9c6f3fa05a7\""
May 8 00:06:21.930950 systemd[1]: Started cri-containerd-cbd3618978e2a664b317e7647f06a63ae48e5de85bde8a4cfc11e9c6f3fa05a7.scope - libcontainer container cbd3618978e2a664b317e7647f06a63ae48e5de85bde8a4cfc11e9c6f3fa05a7.
May 8 00:06:21.967015 containerd[1505]: time="2025-05-08T00:06:21.966894709Z" level=info msg="StartContainer for \"cbd3618978e2a664b317e7647f06a63ae48e5de85bde8a4cfc11e9c6f3fa05a7\" returns successfully"
May 8 00:06:21.970623 systemd[1]: cri-containerd-cbd3618978e2a664b317e7647f06a63ae48e5de85bde8a4cfc11e9c6f3fa05a7.scope: Deactivated successfully.
May 8 00:06:21.982120 kubelet[2612]: E0508 00:06:21.982077 2612 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:06:22.006321 containerd[1505]: time="2025-05-08T00:06:22.006242727Z" level=info msg="shim disconnected" id=cbd3618978e2a664b317e7647f06a63ae48e5de85bde8a4cfc11e9c6f3fa05a7 namespace=k8s.io
May 8 00:06:22.006321 containerd[1505]: time="2025-05-08T00:06:22.006312551Z" level=warning msg="cleaning up after shim disconnected" id=cbd3618978e2a664b317e7647f06a63ae48e5de85bde8a4cfc11e9c6f3fa05a7 namespace=k8s.io
May 8 00:06:22.006321 containerd[1505]: time="2025-05-08T00:06:22.006321598Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:06:22.414755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbd3618978e2a664b317e7647f06a63ae48e5de85bde8a4cfc11e9c6f3fa05a7-rootfs.mount: Deactivated successfully.
May 8 00:06:22.877540 containerd[1505]: time="2025-05-08T00:06:22.877482686Z" level=info msg="CreateContainer within sandbox \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:06:23.153575 containerd[1505]: time="2025-05-08T00:06:23.153437802Z" level=info msg="CreateContainer within sandbox \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"679080ab1054e326c8342282c27bd894ba8c2fd35c134fe26eb3abbf75a7e7ca\""
May 8 00:06:23.154264 containerd[1505]: time="2025-05-08T00:06:23.154229682Z" level=info msg="StartContainer for \"679080ab1054e326c8342282c27bd894ba8c2fd35c134fe26eb3abbf75a7e7ca\""
May 8 00:06:23.187845 systemd[1]: Started cri-containerd-679080ab1054e326c8342282c27bd894ba8c2fd35c134fe26eb3abbf75a7e7ca.scope - libcontainer container 679080ab1054e326c8342282c27bd894ba8c2fd35c134fe26eb3abbf75a7e7ca.
May 8 00:06:23.214440 systemd[1]: cri-containerd-679080ab1054e326c8342282c27bd894ba8c2fd35c134fe26eb3abbf75a7e7ca.scope: Deactivated successfully.
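The kubelet error in the middle of this run ("Container runtime network not ready … cni plugin not initialized") is expected at this stage: it comes from the runtime's readiness conditions, and NetworkReady stays false until the cilium-agent container has written its CNI configuration. A hedged sketch of querying those conditions over the same CRI API (not kubelet's code; skeletal error handling):

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	resp, _ := rt.Status(context.Background(), &runtimeapi.StatusRequest{})
	for _, cond := range resp.GetStatus().GetConditions() {
		// Expect RuntimeReady and NetworkReady; NetworkReady flips to true
		// only after the CNI plugin (here, Cilium) installs its config.
		fmt.Printf("%s=%v reason=%s\n", cond.Type, cond.Status, cond.Reason)
	}
}
```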
May 8 00:06:23.358717 containerd[1505]: time="2025-05-08T00:06:23.358651249Z" level=info msg="StartContainer for \"679080ab1054e326c8342282c27bd894ba8c2fd35c134fe26eb3abbf75a7e7ca\" returns successfully"
May 8 00:06:23.414701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-679080ab1054e326c8342282c27bd894ba8c2fd35c134fe26eb3abbf75a7e7ca-rootfs.mount: Deactivated successfully.
May 8 00:06:23.714780 containerd[1505]: time="2025-05-08T00:06:23.714280096Z" level=info msg="shim disconnected" id=679080ab1054e326c8342282c27bd894ba8c2fd35c134fe26eb3abbf75a7e7ca namespace=k8s.io
May 8 00:06:23.714780 containerd[1505]: time="2025-05-08T00:06:23.714351754Z" level=warning msg="cleaning up after shim disconnected" id=679080ab1054e326c8342282c27bd894ba8c2fd35c134fe26eb3abbf75a7e7ca namespace=k8s.io
May 8 00:06:23.714780 containerd[1505]: time="2025-05-08T00:06:23.714362054Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:06:23.897620 containerd[1505]: time="2025-05-08T00:06:23.897576042Z" level=info msg="CreateContainer within sandbox \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:06:24.063564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278703654.mount: Deactivated successfully.
May 8 00:06:24.140252 containerd[1505]: time="2025-05-08T00:06:24.140201601Z" level=info msg="CreateContainer within sandbox \"cc48044e3847fc4ce96a75bf4bc49f6293ecda6b151f39d638af498f44a6634c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"84add83ebc3aacf0de2c94b246a5b0b18c233c9f34f9865b87eaf4244f40c76e\""
May 8 00:06:24.141852 containerd[1505]: time="2025-05-08T00:06:24.140760544Z" level=info msg="StartContainer for \"84add83ebc3aacf0de2c94b246a5b0b18c233c9f34f9865b87eaf4244f40c76e\""
May 8 00:06:24.176875 systemd[1]: Started cri-containerd-84add83ebc3aacf0de2c94b246a5b0b18c233c9f34f9865b87eaf4244f40c76e.scope - libcontainer container 84add83ebc3aacf0de2c94b246a5b0b18c233c9f34f9865b87eaf4244f40c76e.
May 8 00:06:24.213390 containerd[1505]: time="2025-05-08T00:06:24.213340681Z" level=info msg="StartContainer for \"84add83ebc3aacf0de2c94b246a5b0b18c233c9f34f9865b87eaf4244f40c76e\" returns successfully"
May 8 00:06:24.543200 kubelet[2612]: I0508 00:06:24.543144 2612 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:06:24Z","lastTransitionTime":"2025-05-08T00:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 00:06:24.665716 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 8 00:06:27.911291 systemd-networkd[1438]: lxc_health: Link UP
May 8 00:06:27.921867 systemd-networkd[1438]: lxc_health: Gained carrier
May 8 00:06:29.272952 systemd-networkd[1438]: lxc_health: Gained IPv6LL
May 8 00:06:29.790985 kubelet[2612]: I0508 00:06:29.790923 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-52cg8" podStartSLOduration=10.790905813 podStartE2EDuration="10.790905813s" podCreationTimestamp="2025-05-08 00:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:06:24.935482318 +0000 UTC m=+103.121707523" watchObservedRunningTime="2025-05-08 00:06:29.790905813 +0000 UTC m=+107.977130998"
May 8 00:06:34.370535 sshd[4513]: Connection closed by 10.0.0.1 port 41866
May 8 00:06:34.372050 sshd-session[4506]: pam_unix(sshd:session): session closed for user core
May 8 00:06:34.376600 systemd[1]: sshd@29-10.0.0.54:22-10.0.0.1:41866.service: Deactivated successfully.
May 8 00:06:34.378971 systemd[1]: session-30.scope: Deactivated successfully.
May 8 00:06:34.379798 systemd-logind[1497]: Session 30 logged out. Waiting for processes to exit.
May 8 00:06:34.380667 systemd-logind[1497]: Removed session 30.
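The closing latency line is simple arithmetic: podStartSLOduration=10.790905813 matches watchObservedRunningTime (00:06:29.790905813) minus podCreationTimestamp (00:06:19), and since firstStartedPulling/lastFinishedPulling are both the zero time (no image pull happened), no pull window is subtracted from the SLO figure. A quick check of that subtraction in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the pod_startup_latency_tracker line above.
	created, _ := time.Parse(time.RFC3339, "2025-05-08T00:06:19Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-05-08T00:06:29.790905813Z")
	fmt.Println(observed.Sub(created)) // 10.790905813s, as logged
}
```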