May 8 00:01:44.905301 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025
May 8 00:01:44.905338 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:01:44.905354 kernel: BIOS-provided physical RAM map:
May 8 00:01:44.905364 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 8 00:01:44.905373 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 8 00:01:44.905381 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 8 00:01:44.905392 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 8 00:01:44.905402 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 8 00:01:44.905411 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 8 00:01:44.905419 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 8 00:01:44.905429 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 8 00:01:44.905442 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 8 00:01:44.905454 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 8 00:01:44.905464 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 8 00:01:44.905477 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 8 00:01:44.905488 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 8 00:01:44.905502 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 8 00:01:44.905512 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 8 00:01:44.905521 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 8 00:01:44.905531 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 8 00:01:44.905540 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 8 00:01:44.905550 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 8 00:01:44.905559 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 8 00:01:44.905569 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 00:01:44.905578 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 8 00:01:44.905587 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 00:01:44.905597 kernel: NX (Execute Disable) protection: active
May 8 00:01:44.905612 kernel: APIC: Static calls initialized
May 8 00:01:44.905621 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 8 00:01:44.905631 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 8 00:01:44.905641 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 8 00:01:44.905650 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 8 00:01:44.905659 kernel: extended physical RAM map:
May 8 00:01:44.905669 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 8 00:01:44.905688 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 8 00:01:44.905709 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 8 00:01:44.905731 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 8 00:01:44.905752 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 8 00:01:44.905771 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 8 00:01:44.905801 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 8 00:01:44.905834 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 8 00:01:44.905844 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 8 00:01:44.905854 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 8 00:01:44.905864 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 8 00:01:44.905892 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 8 00:01:44.905912 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 8 00:01:44.905923 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 8 00:01:44.905933 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 8 00:01:44.905943 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 8 00:01:44.905953 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 8 00:01:44.905963 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 8 00:01:44.905973 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 8 00:01:44.905983 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 8 00:01:44.905993 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 8 00:01:44.906008 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 8 00:01:44.906018 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 8 00:01:44.906029 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 8 00:01:44.906039 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 00:01:44.906052 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 8 00:01:44.906063 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 00:01:44.906072 kernel: efi: EFI v2.7 by EDK II
May 8 00:01:44.906083 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 8 00:01:44.906093 kernel: random: crng init done
May 8 00:01:44.906124 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 8 00:01:44.906136 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 8 00:01:44.906164 kernel: secureboot: Secure boot disabled
May 8 00:01:44.906183 kernel: SMBIOS 2.8 present.
May 8 00:01:44.906193 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 8 00:01:44.906203 kernel: Hypervisor detected: KVM
May 8 00:01:44.906213 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:01:44.906223 kernel: kvm-clock: using sched offset of 4269541640 cycles
May 8 00:01:44.906234 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:01:44.906245 kernel: tsc: Detected 2794.748 MHz processor
May 8 00:01:44.906255 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:01:44.906266 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:01:44.906276 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 8 00:01:44.906292 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 8 00:01:44.906303 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:01:44.906313 kernel: Using GB pages for direct mapping
May 8 00:01:44.906323 kernel: ACPI: Early table checksum verification disabled
May 8 00:01:44.906334 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 8 00:01:44.906345 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:01:44.906355 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:01:44.906366 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:01:44.906376 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 8 00:01:44.906390 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:01:44.906400 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:01:44.906410 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:01:44.906420 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:01:44.906430 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 8 00:01:44.906440 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 8 00:01:44.906450 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 8 00:01:44.906460 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 8 00:01:44.906470 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 8 00:01:44.906484 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 8 00:01:44.906494 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 8 00:01:44.906504 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 8 00:01:44.906514 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 8 00:01:44.906524 kernel: No NUMA configuration found
May 8 00:01:44.906534 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 8 00:01:44.906544 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 8 00:01:44.906554 kernel: Zone ranges:
May 8 00:01:44.906564 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:01:44.906578 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 8 00:01:44.906588 kernel: Normal empty
May 8 00:01:44.906602 kernel: Movable zone start for each node
May 8 00:01:44.906612 kernel: Early memory node ranges
May 8 00:01:44.906623 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 8 00:01:44.906632 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 8 00:01:44.906643 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 8 00:01:44.906652 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 8 00:01:44.906662 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 8 00:01:44.906672 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 8 00:01:44.906686 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 8 00:01:44.906696 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 8 00:01:44.906706 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 8 00:01:44.906716 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:01:44.906726 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 8 00:01:44.906747 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 8 00:01:44.906762 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:01:44.906772 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 8 00:01:44.906782 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 8 00:01:44.906793 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 8 00:01:44.906807 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 8 00:01:44.906817 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 8 00:01:44.906832 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:01:44.906842 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:01:44.906853 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:01:44.906864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:01:44.906894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:01:44.906910 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:01:44.906920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:01:44.906930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:01:44.906941 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:01:44.906951 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:01:44.906961 kernel: TSC deadline timer available
May 8 00:01:44.906972 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 8 00:01:44.906982 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 00:01:44.906993 kernel: kvm-guest: KVM setup pv remote TLB flush
May 8 00:01:44.907007 kernel: kvm-guest: setup PV sched yield
May 8 00:01:44.907017 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 8 00:01:44.907028 kernel: Booting paravirtualized kernel on KVM
May 8 00:01:44.907038 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:01:44.907049 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 8 00:01:44.907059 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
May 8 00:01:44.907069 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
May 8 00:01:44.907079 kernel: pcpu-alloc: [0] 0 1 2 3
May 8 00:01:44.907088 kernel: kvm-guest: PV spinlocks enabled
May 8 00:01:44.907113 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 8 00:01:44.907124 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:01:44.907134 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:01:44.907144 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:01:44.907157 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:01:44.907167 kernel: Fallback order for Node 0: 0
May 8 00:01:44.907177 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 8 00:01:44.907186 kernel: Policy zone: DMA32
May 8 00:01:44.907200 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:01:44.907210 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 177824K reserved, 0K cma-reserved)
May 8 00:01:44.907220 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:01:44.907230 kernel: ftrace: allocating 37918 entries in 149 pages
May 8 00:01:44.907240 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:01:44.907250 kernel: Dynamic Preempt: voluntary
May 8 00:01:44.907260 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:01:44.907272 kernel: rcu: RCU event tracing is enabled.
May 8 00:01:44.907282 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:01:44.907307 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:01:44.907329 kernel: Rude variant of Tasks RCU enabled.
May 8 00:01:44.907358 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:01:44.907379 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:01:44.907390 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:01:44.907401 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 8 00:01:44.907412 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:01:44.907423 kernel: Console: colour dummy device 80x25
May 8 00:01:44.907433 kernel: printk: console [ttyS0] enabled
May 8 00:01:44.907449 kernel: ACPI: Core revision 20230628
May 8 00:01:44.907465 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 8 00:01:44.907476 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:01:44.907487 kernel: x2apic enabled
May 8 00:01:44.907498 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:01:44.907513 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 8 00:01:44.907524 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 8 00:01:44.907535 kernel: kvm-guest: setup PV IPIs
May 8 00:01:44.907546 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:01:44.907562 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 00:01:44.907573 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 8 00:01:44.907584 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 8 00:01:44.907594 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 8 00:01:44.907605 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 8 00:01:44.907615 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:01:44.907626 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:01:44.907637 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:01:44.907648 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:01:44.907664 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 8 00:01:44.907674 kernel: RETBleed: Mitigation: untrained return thunk
May 8 00:01:44.907685 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:01:44.907696 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:01:44.907707 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 8 00:01:44.907719 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 8 00:01:44.907733 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 8 00:01:44.907745 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:01:44.907760 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:01:44.907771 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:01:44.907782 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:01:44.907793 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 8 00:01:44.907804 kernel: Freeing SMP alternatives memory: 32K
May 8 00:01:44.907815 kernel: pid_max: default: 32768 minimum: 301
May 8 00:01:44.907826 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:01:44.907837 kernel: landlock: Up and running.
May 8 00:01:44.907847 kernel: SELinux: Initializing.
May 8 00:01:44.907863 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:01:44.907893 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:01:44.907905 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 8 00:01:44.907916 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:01:44.907927 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:01:44.907938 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:01:44.907949 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 8 00:01:44.907960 kernel: ... version: 0
May 8 00:01:44.907971 kernel: ... bit width: 48
May 8 00:01:44.907987 kernel: ... generic registers: 6
May 8 00:01:44.907998 kernel: ... value mask: 0000ffffffffffff
May 8 00:01:44.908008 kernel: ... max period: 00007fffffffffff
May 8 00:01:44.908019 kernel: ... fixed-purpose events: 0
May 8 00:01:44.908030 kernel: ... event mask: 000000000000003f
May 8 00:01:44.908041 kernel: signal: max sigframe size: 1776
May 8 00:01:44.908051 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:01:44.908062 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:01:44.908073 kernel: smp: Bringing up secondary CPUs ...
May 8 00:01:44.908088 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:01:44.908099 kernel: .... node #0, CPUs: #1 #2 #3
May 8 00:01:44.908119 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:01:44.908130 kernel: smpboot: Max logical packages: 1
May 8 00:01:44.908141 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 8 00:01:44.908151 kernel: devtmpfs: initialized
May 8 00:01:44.908162 kernel: x86/mm: Memory block size: 128MB
May 8 00:01:44.908173 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 8 00:01:44.908184 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 8 00:01:44.908200 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 8 00:01:44.908211 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 8 00:01:44.908222 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 8 00:01:44.908233 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 8 00:01:44.908254 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:01:44.908277 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:01:44.908299 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:01:44.908310 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:01:44.908321 kernel: audit: initializing netlink subsys (disabled)
May 8 00:01:44.908341 kernel: audit: type=2000 audit(1746662503.816:1): state=initialized audit_enabled=0 res=1
May 8 00:01:44.908352 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:01:44.908368 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:01:44.908379 kernel: cpuidle: using governor menu
May 8 00:01:44.908390 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:01:44.908401 kernel: dca service started, version 1.12.1
May 8 00:01:44.908412 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 8 00:01:44.908422 kernel: PCI: Using configuration type 1 for base access
May 8 00:01:44.908434 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:01:44.908450 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:01:44.908461 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:01:44.908472 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:01:44.908482 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:01:44.908493 kernel: ACPI: Added _OSI(Module Device)
May 8 00:01:44.908504 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:01:44.908515 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:01:44.908526 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:01:44.908536 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:01:44.908551 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:01:44.908562 kernel: ACPI: Interpreter enabled
May 8 00:01:44.908573 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 00:01:44.908583 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:01:44.908594 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:01:44.908605 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:01:44.908616 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 8 00:01:44.908627 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:01:44.908931 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:01:44.909135 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 8 00:01:44.909311 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 8 00:01:44.909330 kernel: PCI host bridge to bus 0000:00
May 8 00:01:44.909501 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:01:44.909643 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:01:44.909790 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:01:44.909976 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 8 00:01:44.910141 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 8 00:01:44.910303 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 8 00:01:44.910462 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:01:44.910655 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 8 00:01:44.910848 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 8 00:01:44.911046 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 8 00:01:44.911239 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 8 00:01:44.911416 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 8 00:01:44.911587 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 8 00:01:44.911759 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:01:44.911982 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:01:44.912172 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 8 00:01:44.912358 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 8 00:01:44.912528 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 8 00:01:44.912712 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 8 00:01:44.912957 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 8 00:01:44.913151 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 8 00:01:44.913322 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 8 00:01:44.913524 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 8 00:01:44.913708 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 8 00:01:44.913926 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 8 00:01:44.914124 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 8 00:01:44.914307 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 8 00:01:44.915956 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 8 00:01:44.916157 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 8 00:01:44.916358 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 8 00:01:44.916541 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 8 00:01:44.916717 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 8 00:01:44.917054 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 8 00:01:44.917254 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 8 00:01:44.917272 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:01:44.917283 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:01:44.917293 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:01:44.917310 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:01:44.917320 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 8 00:01:44.917332 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 8 00:01:44.917344 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 8 00:01:44.917357 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 8 00:01:44.917367 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 8 00:01:44.917378 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 8 00:01:44.917388 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 8 00:01:44.917399 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 8 00:01:44.917414 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 8 00:01:44.917425 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 8 00:01:44.917436 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 8 00:01:44.917446 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 8 00:01:44.917457 kernel: iommu: Default domain type: Translated
May 8 00:01:44.917467 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:01:44.917478 kernel: efivars: Registered efivars operations
May 8 00:01:44.919012 kernel: PCI: Using ACPI for IRQ routing
May 8 00:01:44.919027 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:01:44.919044 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 8 00:01:44.919055 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 8 00:01:44.919065 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 8 00:01:44.919076 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 8 00:01:44.919087 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 8 00:01:44.919098 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 8 00:01:44.919119 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 8 00:01:44.919130 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 8 00:01:44.919312 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 8 00:01:44.919495 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 8 00:01:44.919668 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:01:44.919684 kernel: vgaarb: loaded
May 8 00:01:44.919695 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:01:44.919705 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:01:44.919716 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:01:44.919726 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:01:44.919737 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:01:44.919748 kernel: pnp: PnP ACPI init
May 8 00:01:44.919983 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 8 00:01:44.920004 kernel: pnp: PnP ACPI: found 6 devices
May 8 00:01:44.920015 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:01:44.920026 kernel: NET: Registered PF_INET protocol family
May 8 00:01:44.920063 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:01:44.920078 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:01:44.920089 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:01:44.920110 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:01:44.920126 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:01:44.920137 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:01:44.920148 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:01:44.920159 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:01:44.920169 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:01:44.920180 kernel: NET: Registered PF_XDP protocol family
May 8 00:01:44.920359 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 8 00:01:44.920536 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 8 00:01:44.920704 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:01:44.920850 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:01:44.921025 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:01:44.921193 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 8 00:01:44.921355 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 8 00:01:44.921509 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 8 00:01:44.921524 kernel: PCI: CLS 0 bytes, default 64
May 8 00:01:44.921535 kernel: Initialise system trusted keyrings
May 8 00:01:44.921552 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:01:44.921563 kernel: Key type asymmetric registered
May 8 00:01:44.921573 kernel: Asymmetric key parser 'x509' registered
May 8 00:01:44.921583 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:01:44.921594 kernel: io scheduler mq-deadline registered
May 8 00:01:44.921605 kernel: io scheduler kyber registered
May 8 00:01:44.921616 kernel: io scheduler bfq registered
May 8 00:01:44.921628 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:01:44.921640 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 8 00:01:44.921656 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 8 00:01:44.921671 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 8 00:01:44.921683 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:01:44.921695 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:01:44.921708 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:01:44.921720 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:01:44.921737 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:01:44.921943 kernel: rtc_cmos 00:04: RTC can wake from S4
May 8 00:01:44.921962 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:01:44.922134 kernel: rtc_cmos 00:04: registered as rtc0
May 8 00:01:44.922293 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:01:44 UTC (1746662504)
May 8 00:01:44.922458 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 8 00:01:44.922475 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 8 00:01:44.922486 kernel: efifb: probing for efifb
May 8 00:01:44.922503 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 8 00:01:44.922515 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 8 00:01:44.922526 kernel: efifb: scrolling: redraw
May 8 00:01:44.922537 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 8 00:01:44.922548 kernel: Console: switching to colour frame buffer device 160x50
May 8 00:01:44.922558 kernel: fb0: EFI VGA frame buffer device
May 8 00:01:44.922570 kernel: pstore: Using crash dump compression: deflate
May 8 00:01:44.922581 kernel: pstore: Registered efi_pstore as persistent store backend
May 8 00:01:44.922592 kernel: NET: Registered PF_INET6 protocol family
May 8 00:01:44.922608 kernel: Segment Routing with IPv6
May 8 00:01:44.922619 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:01:44.922630 kernel: NET: Registered PF_PACKET protocol family
May 8 00:01:44.922645 kernel: Key type dns_resolver registered
May 8 00:01:44.922656 kernel: IPI shorthand broadcast: enabled
May 8 00:01:44.922667 kernel: sched_clock: Marking stable (1657003362, 320288522)->(2142503910, -165212026)
May 8 00:01:44.922678 kernel: registered taskstats version 1
May 8 00:01:44.922689 kernel: Loading compiled-in X.509 certificates
May 8 00:01:44.922700 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6'
May 8 00:01:44.922715 kernel: Key type .fscrypt registered
May 8 00:01:44.922726 kernel: Key type fscrypt-provisioning registered
May 8 00:01:44.922738 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:01:44.922749 kernel: ima: Allocated hash algorithm: sha1
May 8 00:01:44.922760 kernel: ima: No architecture policies found
May 8 00:01:44.922771 kernel: clk: Disabling unused clocks
May 8 00:01:44.922782 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 8 00:01:44.922793 kernel: Write protecting the kernel read-only data: 38912k
May 8 00:01:44.922805 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 8 00:01:44.922820 kernel: Run /init as init process
May 8 00:01:44.922831 kernel: with arguments:
May 8 00:01:44.922842 kernel: /init
May 8 00:01:44.922853 kernel: with environment:
May 8 00:01:44.922864 kernel: HOME=/
May 8 00:01:44.922892 kernel: TERM=linux
May 8 00:01:44.922904 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:01:44.922917 systemd[1]: Successfully made /usr/ read-only.
May 8 00:01:44.922938 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:01:44.922950 systemd[1]: Detected virtualization kvm.
May 8 00:01:44.922962 systemd[1]: Detected architecture x86-64.
May 8 00:01:44.922973 systemd[1]: Running in initrd.
May 8 00:01:44.922984 systemd[1]: No hostname configured, using default hostname.
May 8 00:01:44.922997 systemd[1]: Hostname set to .
May 8 00:01:44.923009 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:01:44.923021 systemd[1]: Queued start job for default target initrd.target.
May 8 00:01:44.923037 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:01:44.923049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:01:44.923061 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:01:44.923073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:01:44.923085 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:01:44.923098 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:01:44.923121 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:01:44.923137 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:01:44.923148 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:01:44.923160 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:01:44.923171 systemd[1]: Reached target paths.target - Path Units.
May 8 00:01:44.923182 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:01:44.923194 systemd[1]: Reached target swap.target - Swaps.
May 8 00:01:44.923205 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:01:44.923216 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:01:44.923231 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:01:44.923243 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:01:44.923254 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 8 00:01:44.923266 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:01:44.923277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:01:44.923289 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:01:44.923301 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:01:44.923313 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:01:44.923325 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:01:44.923341 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:01:44.923353 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:01:44.923363 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:01:44.923372 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:01:44.923380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:01:44.923390 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:01:44.923401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:01:44.923417 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:01:44.923465 systemd-journald[194]: Collecting audit messages is disabled.
May 8 00:01:44.923491 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:01:44.923500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:01:44.923509 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:01:44.923518 systemd-journald[194]: Journal started
May 8 00:01:44.923537 systemd-journald[194]: Runtime Journal (/run/log/journal/0e5e622c0a0543138909151232124c84) is 6M, max 48.2M, 42.2M free.
May 8 00:01:44.906191 systemd-modules-load[195]: Inserted module 'overlay'
May 8 00:01:44.932571 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:01:44.936002 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:01:44.938367 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:01:44.943544 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:01:44.946320 kernel: Bridge firewalling registered
May 8 00:01:44.944927 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:01:44.944988 systemd-modules-load[195]: Inserted module 'br_netfilter'
May 8 00:01:44.945306 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:01:44.946604 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:01:44.949565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:01:44.953061 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:01:44.956082 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:01:44.960796 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:01:44.969234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:01:44.973988 dracut-cmdline[224]: dracut-dracut-053
May 8 00:01:44.982146 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:01:44.982048 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:01:45.022576 systemd-resolved[237]: Positive Trust Anchors:
May 8 00:01:45.022593 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:01:45.022623 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:01:45.033463 systemd-resolved[237]: Defaulting to hostname 'linux'.
May 8 00:01:45.035538 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:01:45.036745 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:01:45.064905 kernel: SCSI subsystem initialized
May 8 00:01:45.074903 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:01:45.087921 kernel: iscsi: registered transport (tcp)
May 8 00:01:45.115080 kernel: iscsi: registered transport (qla4xxx)
May 8 00:01:45.115182 kernel: QLogic iSCSI HBA Driver
May 8 00:01:45.168447 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:01:45.189126 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:01:45.220925 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:01:45.220991 kernel: device-mapper: uevent: version 1.0.3
May 8 00:01:45.221003 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:01:45.301931 kernel: raid6: avx2x4 gen() 21251 MB/s
May 8 00:01:45.318903 kernel: raid6: avx2x2 gen() 20682 MB/s
May 8 00:01:45.408901 kernel: raid6: avx2x1 gen() 20967 MB/s
May 8 00:01:45.408926 kernel: raid6: using algorithm avx2x4 gen() 21251 MB/s
May 8 00:01:45.426123 kernel: raid6: .... xor() 5794 MB/s, rmw enabled
May 8 00:01:45.426146 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:01:45.523920 kernel: xor: automatically using best checksumming function avx
May 8 00:01:45.717921 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:01:45.730801 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:01:45.743073 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:01:45.759565 systemd-udevd[415]: Using default interface naming scheme 'v255'.
May 8 00:01:45.766386 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:01:45.772216 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:01:45.786702 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
May 8 00:01:45.869025 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:01:45.878023 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:01:45.947479 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:01:45.958117 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:01:45.973222 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:01:45.976432 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:01:45.979230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:01:45.981997 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:01:46.058954 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 8 00:01:46.068693 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:01:46.068896 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:01:46.068915 kernel: GPT:9289727 != 19775487
May 8 00:01:46.068937 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:01:46.068955 kernel: GPT:9289727 != 19775487
May 8 00:01:46.068969 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:01:46.068983 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:01:46.066547 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:01:46.075488 kernel: libata version 3.00 loaded.
May 8 00:01:46.076808 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:01:46.131923 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:01:46.077208 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:01:46.133569 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:01:46.196219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:01:46.196964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:01:46.200975 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:01:46.205884 kernel: ahci 0000:00:1f.2: version 3.0
May 8 00:01:46.373030 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 8 00:01:46.373054 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:01:46.373082 kernel: AES CTR mode by8 optimization enabled
May 8 00:01:46.373095 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 8 00:01:46.373281 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 8 00:01:46.373433 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (478)
May 8 00:01:46.373445 kernel: scsi host0: ahci
May 8 00:01:46.373635 kernel: scsi host1: ahci
May 8 00:01:46.373798 kernel: scsi host2: ahci
May 8 00:01:46.373971 kernel: scsi host3: ahci
May 8 00:01:46.374136 kernel: scsi host4: ahci
May 8 00:01:46.374292 kernel: scsi host5: ahci
May 8 00:01:46.374477 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 8 00:01:46.374494 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 8 00:01:46.374505 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 8 00:01:46.374516 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 8 00:01:46.374529 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 8 00:01:46.374540 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 8 00:01:46.212958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:01:46.385298 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474)
May 8 00:01:46.285282 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:01:46.316201 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:01:46.386834 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:01:46.400738 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:01:46.402343 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:01:46.421760 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:01:46.432771 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:01:46.480162 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:01:46.536842 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:01:46.558941 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:01:46.739903 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 00:01:46.739942 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 8 00:01:46.740903 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 8 00:01:46.741912 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 00:01:46.742897 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 00:01:46.743914 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 8 00:01:46.744904 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 8 00:01:46.746234 kernel: ata3.00: applying bridge limits
May 8 00:01:46.746247 kernel: ata3.00: configured for UDMA/100
May 8 00:01:46.746907 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 8 00:01:46.842098 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 8 00:01:46.855799 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 8 00:01:46.855828 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 8 00:01:46.954627 disk-uuid[570]: Primary Header is updated.
May 8 00:01:46.954627 disk-uuid[570]: Secondary Entries is updated.
May 8 00:01:46.954627 disk-uuid[570]: Secondary Header is updated.
May 8 00:01:47.038215 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:01:47.041902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:01:48.057928 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:01:48.058300 disk-uuid[583]: The operation has completed successfully.
May 8 00:01:48.086373 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:01:48.086497 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:01:48.151205 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:01:48.155579 sh[597]: Success
May 8 00:01:48.168908 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 8 00:01:48.244556 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:01:48.259051 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:01:48.263392 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:01:48.277726 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a
May 8 00:01:48.277787 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:01:48.277798 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:01:48.279507 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:01:48.279526 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:01:48.284376 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:01:48.285181 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:01:48.294054 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:01:48.295778 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:01:48.315596 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:01:48.315655 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:01:48.315667 kernel: BTRFS info (device vda6): using free space tree
May 8 00:01:48.318896 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:01:48.322906 kernel: BTRFS info (device vda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:01:48.329752 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:01:48.338085 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:01:48.529378 ignition[692]: Ignition 2.20.0
May 8 00:01:48.529389 ignition[692]: Stage: fetch-offline
May 8 00:01:48.529524 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:01:48.529426 ignition[692]: no configs at "/usr/lib/ignition/base.d"
May 8 00:01:48.529438 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:01:48.529552 ignition[692]: parsed url from cmdline: ""
May 8 00:01:48.529556 ignition[692]: no config URL provided
May 8 00:01:48.529562 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:01:48.529571 ignition[692]: no config at "/usr/lib/ignition/user.ign"
May 8 00:01:48.529599 ignition[692]: op(1): [started] loading QEMU firmware config module
May 8 00:01:48.529604 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:01:48.539884 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:01:48.541398 ignition[692]: op(1): [finished] loading QEMU firmware config module
May 8 00:01:48.579199 systemd-networkd[783]: lo: Link UP
May 8 00:01:48.579212 systemd-networkd[783]: lo: Gained carrier
May 8 00:01:48.581454 ignition[692]: parsing config with SHA512: 21c46c065d8483d6195a55d35b34ea3d5c98ccfbb493153fe8e361e77e3e1ee880321650494a1b6acb1a52e732e153a3368b29a70d7c83d13e77e83ec91e7c4a
May 8 00:01:48.581886 systemd-networkd[783]: Enumeration completed
May 8 00:01:48.582023 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:01:48.582639 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:01:48.582645 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:01:48.587827 ignition[692]: fetch-offline: fetch-offline passed
May 8 00:01:48.583895 systemd-networkd[783]: eth0: Link UP
May 8 00:01:48.587939 ignition[692]: Ignition finished successfully
May 8 00:01:48.583900 systemd-networkd[783]: eth0: Gained carrier
May 8 00:01:48.583908 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:01:48.586111 systemd[1]: Reached target network.target - Network.
May 8 00:01:48.587323 unknown[692]: fetched base config from "system"
May 8 00:01:48.587334 unknown[692]: fetched user config from "qemu"
May 8 00:01:48.590652 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:01:48.593025 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:01:48.602981 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.29/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:01:48.606112 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:01:48.658598 ignition[788]: Ignition 2.20.0
May 8 00:01:48.658614 ignition[788]: Stage: kargs
May 8 00:01:48.658840 ignition[788]: no configs at "/usr/lib/ignition/base.d"
May 8 00:01:48.658856 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:01:48.660080 ignition[788]: kargs: kargs passed
May 8 00:01:48.660147 ignition[788]: Ignition finished successfully
May 8 00:01:48.667738 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:01:48.677226 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:01:48.730205 ignition[798]: Ignition 2.20.0
May 8 00:01:48.730217 ignition[798]: Stage: disks
May 8 00:01:48.730396 ignition[798]: no configs at "/usr/lib/ignition/base.d"
May 8 00:01:48.730408 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:01:48.731437 ignition[798]: disks: disks passed
May 8 00:01:48.731494 ignition[798]: Ignition finished successfully
May 8 00:01:48.737684 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:01:48.739902 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:01:48.740002 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:01:48.740374 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:01:48.740739 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:01:48.741304 systemd[1]: Reached target basic.target - Basic System.
May 8 00:01:48.762308 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:01:48.781049 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:01:48.788264 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:01:48.800071 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:01:48.895925 kernel: EXT4-fs (vda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none.
May 8 00:01:48.897140 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:01:48.898288 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:01:48.922173 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:01:48.924007 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:01:48.927014 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:01:48.927089 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:01:48.937282 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (817)
May 8 00:01:48.937331 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:01:48.937365 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:01:48.937381 kernel: BTRFS info (device vda6): using free space tree
May 8 00:01:48.927122 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:01:48.940000 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:01:48.942039 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:01:48.943052 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:01:48.944304 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:01:48.986292 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:01:48.992358 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
May 8 00:01:48.998757 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:01:49.004246 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:01:49.096929 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:01:49.106029 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:01:49.107005 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:01:49.118899 kernel: BTRFS info (device vda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:01:49.161132 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:01:49.271265 ignition[934]: INFO : Ignition 2.20.0
May 8 00:01:49.271265 ignition[934]: INFO : Stage: mount
May 8 00:01:49.273291 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:01:49.273291 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:01:49.273291 ignition[934]: INFO : mount: mount passed
May 8 00:01:49.273291 ignition[934]: INFO : Ignition finished successfully
May 8 00:01:49.277435 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:01:49.280321 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:01:49.294086 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:01:49.329038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:01:49.358430 systemd-resolved[237]: Detected conflict on linux IN A 10.0.0.29
May 8 00:01:49.358449 systemd-resolved[237]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
May 8 00:01:49.421931 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (943)
May 8 00:01:49.424471 kernel: BTRFS info (device vda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:01:49.424524 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:01:49.424540 kernel: BTRFS info (device vda6): using free space tree
May 8 00:01:49.427910 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:01:49.429773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:01:49.470378 ignition[960]: INFO : Ignition 2.20.0
May 8 00:01:49.470378 ignition[960]: INFO : Stage: files
May 8 00:01:49.472455 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:01:49.472455 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:01:49.472455 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:01:49.472455 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:01:49.472455 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:01:49.479033 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:01:49.479033 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:01:49.479033 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:01:49.479033 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:01:49.479033 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 00:01:49.475152 unknown[960]: wrote ssh authorized keys file for user: core
May 8 00:01:49.556559 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:01:49.663758 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:01:49.663758 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:01:49.668031 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 8 00:01:50.158771 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:01:50.244093 systemd-networkd[783]: eth0: Gained IPv6LL
May 8 00:01:50.368513 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:01:50.368513 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:01:50.373034 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 8 00:01:50.805734 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:01:51.196720 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:01:51.196720 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 8 00:01:51.201318 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:01:51.203707 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:01:51.203707 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 8 00:01:51.203707 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 8 00:01:51.203707 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:01:51.203707 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:01:51.203707 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 8 00:01:51.203707 ignition[960]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:01:51.247063 ignition[960]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:01:51.273945 ignition[960]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:01:51.273945 ignition[960]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:01:51.273945 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:01:51.273945 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:01:51.273945 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:01:51.273945 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:01:51.273945 ignition[960]: INFO : files: files passed
May 8 00:01:51.273945 ignition[960]: INFO : Ignition finished successfully
May 8 00:01:51.289067 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:01:51.301056 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:01:51.301935 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:01:51.311429 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:01:51.311604 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:01:51.319503 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 00:01:51.321175 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:01:51.321175 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:01:51.345586 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:01:51.321768 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:01:51.323268 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:01:51.348176 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:01:51.375514 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:01:51.375692 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:01:51.378424 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:01:51.381064 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:01:51.381201 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:01:51.382151 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:01:51.401510 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:01:51.415059 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:01:51.424773 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:01:51.426678 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:01:51.429364 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:01:51.431731 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:01:51.431859 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:01:51.434586 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:01:51.436389 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:01:51.438770 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:01:51.441225 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:01:51.443588 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:01:51.446131 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:01:51.448289 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:01:51.450596 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:01:51.452620 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:01:51.454816 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:01:51.456614 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:01:51.456744 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:01:51.459081 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:01:51.460507 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:01:51.462617 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:01:51.462758 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:01:51.464828 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:01:51.464971 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:01:51.467340 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:01:51.467461 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:01:51.469307 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:01:51.471049 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:01:51.474956 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:01:51.476717 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:01:51.478715 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:01:51.480519 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:01:51.480626 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:01:51.482648 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:01:51.482756 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:01:51.485125 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:01:51.485249 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:01:51.487212 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:01:51.487332 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:01:51.496051 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:01:51.497843 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:01:51.498776 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:01:51.498968 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:01:51.501196 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:01:51.501330 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:01:51.508902 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:01:51.509036 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:01:51.514090 ignition[1015]: INFO : Ignition 2.20.0
May 8 00:01:51.514090 ignition[1015]: INFO : Stage: umount
May 8 00:01:51.515812 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:01:51.515812 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:01:51.518652 ignition[1015]: INFO : umount: umount passed
May 8 00:01:51.519479 ignition[1015]: INFO : Ignition finished successfully
May 8 00:01:51.522692 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:01:51.522831 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:01:51.524046 systemd[1]: Stopped target network.target - Network.
May 8 00:01:51.525695 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:01:51.525758 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:01:51.527370 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:01:51.527423 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:01:51.529187 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:01:51.529238 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:01:51.529515 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:01:51.529559 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:01:51.529969 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:01:51.530380 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:01:51.531676 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:01:51.539559 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:01:51.539698 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:01:51.544383 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 8 00:01:51.544781 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:01:51.544835 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:01:51.550670 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 8 00:01:51.550946 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:01:51.551074 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:01:51.554743 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 8 00:01:51.555504 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:01:51.555594 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:01:51.569065 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:01:51.569152 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:01:51.569215 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:01:51.578730 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:01:51.579818 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:01:51.582069 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:01:51.582131 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:01:51.585271 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:01:51.589284 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:01:51.600629 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:01:51.601673 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:01:51.607747 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:01:51.608865 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:01:51.611734 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:01:51.611791 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:01:51.614806 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:01:51.614854 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:01:51.635613 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:01:51.635672 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:01:51.638779 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:01:51.638838 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:01:51.674457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:01:51.674519 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:01:51.689029 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:01:51.691268 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:01:51.691329 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:01:51.694903 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 00:01:51.694966 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:01:51.698595 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:01:51.698655 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:01:51.701868 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:01:51.701948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:01:51.705523 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:01:51.706693 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:01:51.944608 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:01:51.944799 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:01:51.948221 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:01:51.950421 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:01:51.950563 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:01:51.968217 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:01:51.994963 systemd[1]: Switching root.
May 8 00:01:52.040819 systemd-journald[194]: Journal stopped
May 8 00:01:53.409106 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
May 8 00:01:53.409194 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:01:53.409213 kernel: SELinux: policy capability open_perms=1
May 8 00:01:53.409234 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:01:53.409249 kernel: SELinux: policy capability always_check_network=0
May 8 00:01:53.409271 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:01:53.409288 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:01:53.409302 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:01:53.409318 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:01:53.409334 kernel: audit: type=1403 audit(1746662512.484:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:01:53.409351 systemd[1]: Successfully loaded SELinux policy in 61.402ms.
May 8 00:01:53.409390 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.426ms.
May 8 00:01:53.409413 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:01:53.409429 systemd[1]: Detected virtualization kvm.
May 8 00:01:53.409449 systemd[1]: Detected architecture x86-64.
May 8 00:01:53.409464 systemd[1]: Detected first boot.
May 8 00:01:53.409481 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:01:53.409497 zram_generator::config[1062]: No configuration found.
May 8 00:01:53.409515 kernel: Guest personality initialized and is inactive
May 8 00:01:53.409531 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 8 00:01:53.409547 kernel: Initialized host personality
May 8 00:01:53.409566 kernel: NET: Registered PF_VSOCK protocol family
May 8 00:01:53.409583 systemd[1]: Populated /etc with preset unit settings.
May 8 00:01:53.409607 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 8 00:01:53.409624 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:01:53.409641 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:01:53.409656 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:01:53.409673 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:01:53.409690 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:01:53.409711 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:01:53.409732 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:01:53.409750 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:01:53.409767 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:01:53.409783 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:01:53.409801 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:01:53.409820 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:01:53.409844 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:01:53.409870 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:01:53.409953 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:01:53.409971 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:01:53.409989 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:01:53.410007 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 00:01:53.410023 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:01:53.410040 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:01:53.410057 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:01:53.410078 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:01:53.410095 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:01:53.410112 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:01:53.410129 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:01:53.410146 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:01:53.410162 systemd[1]: Reached target swap.target - Swaps.
May 8 00:01:53.410180 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:01:53.410197 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:01:53.410214 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 8 00:01:53.410231 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:01:53.410252 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:01:53.410269 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:01:53.410285 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:01:53.410303 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:01:53.410319 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:01:53.410336 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:01:53.410354 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:01:53.410370 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:01:53.410386 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:01:53.410408 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:01:53.410426 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:01:53.410444 systemd[1]: Reached target machines.target - Containers.
May 8 00:01:53.410460 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:01:53.410477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:01:53.410494 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:01:53.410511 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:01:53.410528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:01:53.410549 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:01:53.410566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:01:53.410582 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:01:53.410599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:01:53.410616 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:01:53.410636 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:01:53.410652 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:01:53.410668 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:01:53.410690 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:01:53.410708 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:01:53.410725 kernel: fuse: init (API version 7.39)
May 8 00:01:53.410742 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:01:53.410757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:01:53.410773 kernel: loop: module loaded
May 8 00:01:53.410814 systemd-journald[1133]: Collecting audit messages is disabled.
May 8 00:01:53.410845 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:01:53.410892 systemd-journald[1133]: Journal started
May 8 00:01:53.410923 systemd-journald[1133]: Runtime Journal (/run/log/journal/0e5e622c0a0543138909151232124c84) is 6M, max 48.2M, 42.2M free.
May 8 00:01:53.164792 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:01:53.178378 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 00:01:53.178987 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:01:53.414895 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:01:53.419714 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 8 00:01:53.425926 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:01:53.431033 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:01:53.431081 systemd[1]: Stopped verity-setup.service.
May 8 00:01:53.431101 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:01:53.439337 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:01:53.439403 kernel: ACPI: bus type drm_connector registered
May 8 00:01:53.439264 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:01:53.440508 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:01:53.441891 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:01:53.443092 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:01:53.444367 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:01:53.445645 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:01:53.447024 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:01:53.448649 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:01:53.448899 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:01:53.450523 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:01:53.450767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:01:53.452396 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:01:53.452616 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:01:53.454085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:01:53.454299 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:01:53.455962 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:01:53.456176 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:01:53.457710 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:01:53.457970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:01:53.459500 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:01:53.461021 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:01:53.462620 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:01:53.464450 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 8 00:01:53.478133 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:01:53.484983 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:01:53.487422 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:01:53.488598 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:01:53.488628 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:01:53.490812 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 8 00:01:53.493218 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:01:53.496174 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:01:53.497531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:01:53.538102 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:01:53.541346 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:01:53.542773 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:01:53.547755 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:01:53.549471 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:01:53.553596 systemd-journald[1133]: Time spent on flushing to /var/log/journal/0e5e622c0a0543138909151232124c84 is 21.976ms for 1054 entries.
May 8 00:01:53.553596 systemd-journald[1133]: System Journal (/var/log/journal/0e5e622c0a0543138909151232124c84) is 8M, max 195.6M, 187.6M free.
May 8 00:01:53.961345 systemd-journald[1133]: Received client request to flush runtime journal.
May 8 00:01:53.961386 kernel: loop0: detected capacity change from 0 to 138176
May 8 00:01:53.961402 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:01:53.961427 kernel: loop1: detected capacity change from 0 to 147912
May 8 00:01:53.961441 kernel: loop2: detected capacity change from 0 to 210664
May 8 00:01:53.961455 kernel: loop3: detected capacity change from 0 to 138176
May 8 00:01:53.565059 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:01:53.577069 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:01:53.580054 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:01:53.583813 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:01:53.585421 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:01:53.586770 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:01:53.588366 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:01:53.595829 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:01:53.611018 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 8 00:01:53.687653 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:01:53.692386 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
May 8 00:01:53.692399 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
May 8 00:01:53.698715 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:01:53.938300 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:01:53.940445 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:01:53.950125 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 8 00:01:53.951994 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:01:53.957753 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:01:53.965410 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:01:53.970185 kernel: loop4: detected capacity change from 0 to 147912
May 8 00:01:54.078475 kernel: loop5: detected capacity change from 0 to 210664
May 8 00:01:54.086215 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:01:54.101444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:01:54.110219 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 8 00:01:54.110984 (sd-merge)[1198]: Merged extensions into '/usr'.
May 8 00:01:54.115918 systemd[1]: Reload requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:01:54.115936 systemd[1]: Reloading...
May 8 00:01:54.130848 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
May 8 00:01:54.130921 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
May 8 00:01:54.201901 zram_generator::config[1243]: No configuration found.
May 8 00:01:54.219121 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:01:54.323078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:01:54.391069 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:01:54.391942 systemd[1]: Reloading finished in 275 ms.
May 8 00:01:54.411757 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:01:54.413391 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:01:54.430293 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 8 00:01:54.431932 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:01:54.452530 systemd[1]: Starting ensure-sysext.service...
May 8 00:01:54.454867 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:01:54.475801 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)...
May 8 00:01:54.475821 systemd[1]: Reloading...
May 8 00:01:54.483036 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:01:54.483342 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:01:54.484524 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:01:54.484909 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
May 8 00:01:54.485021 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
May 8 00:01:54.490507 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:01:54.490522 systemd-tmpfiles[1281]: Skipping /boot
May 8 00:01:54.507383 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:01:54.507401 systemd-tmpfiles[1281]: Skipping /boot
May 8 00:01:54.554900 zram_generator::config[1316]: No configuration found.
May 8 00:01:54.661898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:01:54.727780 systemd[1]: Reloading finished in 251 ms.
May 8 00:01:54.739061 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:01:54.760357 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:01:54.783272 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:01:54.786102 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:01:54.788595 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:01:54.794304 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:01:54.798144 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:01:54.803139 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:01:54.807452 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:01:54.807637 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:01:54.808921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:01:54.813888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:01:54.825564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:01:54.826756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:01:54.826900 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:01:54.831393 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:01:54.832551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:01:54.834476 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:01:54.836634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:01:54.837015 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:01:54.838293 augenrules[1378]: No rules
May 8 00:01:54.838859 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:01:54.839316 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:01:54.841328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:01:54.841565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:01:54.843462 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:01:54.843716 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:01:54.851391 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
May 8 00:01:54.855111 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:01:54.855366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:01:54.864543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:01:54.866867 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:01:54.873959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:01:54.875182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:01:54.875385 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:01:54.878225 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:01:54.879302 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:01:54.880717 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:01:54.882388 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:01:54.884404 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:01:54.886533 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:01:54.888436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:01:54.888668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:01:54.890942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:01:54.891178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:01:54.893290 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:01:54.893515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:01:54.898244 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:01:54.919634 systemd[1]: Finished ensure-sysext.service.
May 8 00:01:54.924745 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:01:54.933767 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:01:54.936216 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1403)
May 8 00:01:54.935300 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:01:54.940064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:01:54.945075 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:01:54.952050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:01:55.140717 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:01:55.142004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:01:55.142049 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:01:55.151748 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:01:55.158059 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:01:55.159432 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:01:55.159474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:01:55.160314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:01:55.160670 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:01:55.164462 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:01:55.164739 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:01:55.166529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:01:55.166768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:01:55.168586 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:01:55.168838 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:01:55.172730 augenrules[1423]: /sbin/augenrules: No change
May 8 00:01:55.182477 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 00:01:55.201923 augenrules[1453]: No rules
May 8 00:01:55.204942 systemd-resolved[1353]: Positive Trust Anchors:
May 8 00:01:55.204961 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:01:55.204994 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:01:55.213249 systemd-resolved[1353]: Defaulting to hostname 'linux'.
May 8 00:01:55.294852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:01:55.295069 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:01:55.295249 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:01:55.306316 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:01:55.306600 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:01:55.311961 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 8 00:01:55.316464 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:01:55.317095 kernel: ACPI: button: Power Button [PWRF]
May 8 00:01:55.325200 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:01:55.332066 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:01:55.349167 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:01:55.392727 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 8 00:01:55.394117 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 8 00:01:55.394306 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 8 00:01:55.394499 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 8 00:01:55.400203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:01:55.454774 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:01:55.522309 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:01:55.525017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:01:55.525224 systemd-networkd[1434]: lo: Link UP
May 8 00:01:55.525231 systemd-networkd[1434]: lo: Gained carrier
May 8 00:01:55.525623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:01:55.529513 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 00:01:55.532133 systemd-networkd[1434]: Enumeration completed
May 8 00:01:55.532293 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:01:55.533074 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:01:55.533086 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:01:55.534474 systemd-networkd[1434]: eth0: Link UP
May 8 00:01:55.534484 systemd-networkd[1434]: eth0: Gained carrier
May 8 00:01:55.536645 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 8 00:01:55.534499 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:01:55.536564 systemd[1]: Reached target network.target - Network.
May 8 00:01:55.545150 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 8 00:01:55.546944 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.29/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:01:55.548394 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
May 8 00:01:55.548670 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:01:56.645450 systemd-resolved[1353]: Clock change detected. Flushing caches.
May 8 00:01:56.645540 systemd-timesyncd[1436]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 8 00:01:56.645598 systemd-timesyncd[1436]: Initial clock synchronization to Thu 2025-05-08 00:01:56.645407 UTC.
May 8 00:01:56.648557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:01:56.657353 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:01:56.755571 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 8 00:01:56.958728 kernel: kvm_amd: TSC scaling supported
May 8 00:01:56.958798 kernel: kvm_amd: Nested Virtualization enabled
May 8 00:01:56.958813 kernel: kvm_amd: Nested Paging enabled
May 8 00:01:56.958825 kernel: kvm_amd: LBR virtualization supported
May 8 00:01:56.960696 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 8 00:01:56.960731 kernel: kvm_amd: Virtual GIF supported
May 8 00:01:56.981371 kernel: EDAC MC: Ver: 3.0.0
May 8 00:01:56.992114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:01:57.031823 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:01:57.044522 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:01:57.151573 lvm[1487]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:01:57.182368 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:01:57.184126 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:01:57.185478 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:01:57.186848 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:01:57.188304 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:01:57.190044 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:01:57.191638 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:01:57.193411 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:01:57.194852 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:01:57.194882 systemd[1]: Reached target paths.target - Path Units.
May 8 00:01:57.195948 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:01:57.198114 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:01:57.201047 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:01:57.205612 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 8 00:01:57.207583 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 8 00:01:57.209055 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 8 00:01:57.220916 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:01:57.222529 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 8 00:01:57.225353 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:01:57.227160 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:01:57.228522 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:01:57.229641 systemd[1]: Reached target basic.target - Basic System.
May 8 00:01:57.230801 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:01:57.230857 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:01:57.232113 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:01:57.234539 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:01:57.235822 lvm[1491]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:01:57.239447 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:01:57.243028 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:01:57.247427 jq[1494]: false
May 8 00:01:57.379500 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:01:57.383492 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:01:57.487893 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:01:57.493805 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:01:57.499026 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:01:57.503875 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:01:57.506216 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:01:57.506936 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:01:57.507652 dbus-daemon[1493]: [system] SELinux support is enabled
May 8 00:01:57.509540 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:01:57.512713 extend-filesystems[1495]: Found loop3
May 8 00:01:57.514246 extend-filesystems[1495]: Found loop4
May 8 00:01:57.514246 extend-filesystems[1495]: Found loop5
May 8 00:01:57.514246 extend-filesystems[1495]: Found sr0
May 8 00:01:57.514246 extend-filesystems[1495]: Found vda
May 8 00:01:57.514246 extend-filesystems[1495]: Found vda1
May 8 00:01:57.514246 extend-filesystems[1495]: Found vda2
May 8 00:01:57.514246 extend-filesystems[1495]: Found vda3
May 8 00:01:57.514246 extend-filesystems[1495]: Found usr
May 8 00:01:57.514246 extend-filesystems[1495]: Found vda4
May 8 00:01:57.514246 extend-filesystems[1495]: Found vda6
May 8 00:01:57.514246 extend-filesystems[1495]: Found vda7
May 8 00:01:57.514246 extend-filesystems[1495]: Found vda9
May 8 00:01:57.514246 extend-filesystems[1495]: Checking size of /dev/vda9
May 8 00:01:57.513566 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:01:57.516807 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:01:57.523122 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:01:57.531703 jq[1508]: true
May 8 00:01:57.532833 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:01:57.535413 update_engine[1507]: I20250508 00:01:57.533954 1507 main.cc:92] Flatcar Update Engine starting
May 8 00:01:57.533132 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:01:57.535763 update_engine[1507]: I20250508 00:01:57.535553 1507 update_check_scheduler.cc:74] Next update check in 7m30s
May 8 00:01:57.533514 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:01:57.533760 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 00:01:57.536537 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:01:57.536816 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:01:57.549741 jq[1515]: true
May 8 00:01:57.553135 (ntainerd)[1519]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:01:57.557062 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:01:57.557127 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:01:57.559916 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:01:57.560672 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:01:57.562379 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:01:57.573562 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:01:57.651368 tar[1514]: linux-amd64/helm
May 8 00:01:57.658025 extend-filesystems[1495]: Resized partition /dev/vda9
May 8 00:01:57.704482 extend-filesystems[1546]: resize2fs 1.47.1 (20-May-2024)
May 8 00:01:57.723860 systemd-logind[1506]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 00:01:57.723896 systemd-logind[1506]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 00:01:57.728475 systemd-logind[1506]: New seat seat0.
May 8 00:01:57.762858 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:01:57.778050 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:01:57.803828 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1403)
May 8 00:01:57.825812 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:01:57.910777 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:01:57.963350 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 8 00:01:57.964890 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:01:57.974697 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:01:57.975029 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:01:58.182846 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:01:58.235469 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:01:58.247920 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:01:58.250344 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 8 00:01:58.276272 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:01:58.402352 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 8 00:01:58.598248 systemd-networkd[1434]: eth0: Gained IPv6LL
May 8 00:01:58.604526 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:01:58.628725 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:01:58.642572 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 8 00:01:58.650783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:01:58.653874 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:01:59.223145 containerd[1519]: time="2025-05-08T00:01:59.221679083Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 8 00:01:59.223422 extend-filesystems[1546]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 00:01:59.223422 extend-filesystems[1546]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 00:01:59.223422 extend-filesystems[1546]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 8 00:01:58.678452 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 8 00:01:59.228804 extend-filesystems[1495]: Resized filesystem in /dev/vda9
May 8 00:01:58.678830 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 8 00:01:58.681551 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:01:59.226270 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:01:59.226672 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 00:01:59.303046 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:01:59.315192 containerd[1519]: time="2025-05-08T00:01:59.315137831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:01:59.317386 containerd[1519]: time="2025-05-08T00:01:59.317042523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:01:59.317386 containerd[1519]: time="2025-05-08T00:01:59.317072490Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:01:59.317386 containerd[1519]: time="2025-05-08T00:01:59.317087818Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:01:59.317386 containerd[1519]: time="2025-05-08T00:01:59.317276662Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:01:59.317386 containerd[1519]: time="2025-05-08T00:01:59.317293935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:01:59.317386 containerd[1519]: time="2025-05-08T00:01:59.317382190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:01:59.317536 containerd[1519]: time="2025-05-08T00:01:59.317394924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:01:59.317677 containerd[1519]: time="2025-05-08T00:01:59.317647227Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:01:59.317677 containerd[1519]: time="2025-05-08T00:01:59.317669008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:01:59.317717 containerd[1519]: time="2025-05-08T00:01:59.317685329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:01:59.317717 containerd[1519]: time="2025-05-08T00:01:59.317699445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:01:59.317823 containerd[1519]: time="2025-05-08T00:01:59.317802408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:01:59.318081 containerd[1519]: time="2025-05-08T00:01:59.318059150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:01:59.318268 containerd[1519]: time="2025-05-08T00:01:59.318245349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:01:59.318268 containerd[1519]: time="2025-05-08T00:01:59.318262131Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:01:59.318427 containerd[1519]: time="2025-05-08T00:01:59.318401692Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..."
type=io.containerd.metadata.v1 May 8 00:01:59.318518 containerd[1519]: time="2025-05-08T00:01:59.318495268Z" level=info msg="metadata content store policy set" policy=shared May 8 00:01:59.344269 tar[1514]: linux-amd64/LICENSE May 8 00:01:59.344698 tar[1514]: linux-amd64/README.md May 8 00:01:59.359082 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:01:59.533510 bash[1543]: Updated "/home/core/.ssh/authorized_keys" May 8 00:01:59.535581 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:01:59.538344 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:01:59.726675 containerd[1519]: time="2025-05-08T00:01:59.726585873Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:01:59.726891 containerd[1519]: time="2025-05-08T00:01:59.726731997Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:01:59.726891 containerd[1519]: time="2025-05-08T00:01:59.726756864Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:01:59.726891 containerd[1519]: time="2025-05-08T00:01:59.726779527Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:01:59.726891 containerd[1519]: time="2025-05-08T00:01:59.726799143Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:01:59.727105 containerd[1519]: time="2025-05-08T00:01:59.727073287Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:01:59.727480 containerd[1519]: time="2025-05-08T00:01:59.727455003Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 8 00:01:59.727648 containerd[1519]: time="2025-05-08T00:01:59.727618760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:01:59.727673 containerd[1519]: time="2025-05-08T00:01:59.727644188Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:01:59.727699 containerd[1519]: time="2025-05-08T00:01:59.727673874Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:01:59.727699 containerd[1519]: time="2025-05-08T00:01:59.727693380Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:01:59.727754 containerd[1519]: time="2025-05-08T00:01:59.727708689Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:01:59.727916 containerd[1519]: time="2025-05-08T00:01:59.727893335Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:01:59.727936 containerd[1519]: time="2025-05-08T00:01:59.727917901Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:01:59.727961 containerd[1519]: time="2025-05-08T00:01:59.727934863Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:01:59.727961 containerd[1519]: time="2025-05-08T00:01:59.727952086Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:01:59.727998 containerd[1519]: time="2025-05-08T00:01:59.727968126Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 8 00:01:59.727998 containerd[1519]: time="2025-05-08T00:01:59.727982763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:01:59.728039 containerd[1519]: time="2025-05-08T00:01:59.728008131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728039 containerd[1519]: time="2025-05-08T00:01:59.728024020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728075 containerd[1519]: time="2025-05-08T00:01:59.728052624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728075 containerd[1519]: time="2025-05-08T00:01:59.728066620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728126 containerd[1519]: time="2025-05-08T00:01:59.728080066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728126 containerd[1519]: time="2025-05-08T00:01:59.728094212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728126 containerd[1519]: time="2025-05-08T00:01:59.728107156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728126 containerd[1519]: time="2025-05-08T00:01:59.728120812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728222 containerd[1519]: time="2025-05-08T00:01:59.728151730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728222 containerd[1519]: time="2025-05-08T00:01:59.728195773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 May 8 00:01:59.728222 containerd[1519]: time="2025-05-08T00:01:59.728213947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728286 containerd[1519]: time="2025-05-08T00:01:59.728227442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728286 containerd[1519]: time="2025-05-08T00:01:59.728244694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728286 containerd[1519]: time="2025-05-08T00:01:59.728265183Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:01:59.728358 containerd[1519]: time="2025-05-08T00:01:59.728300990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728358 containerd[1519]: time="2025-05-08T00:01:59.728334593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728358 containerd[1519]: time="2025-05-08T00:01:59.728352937Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:01:59.728482 containerd[1519]: time="2025-05-08T00:01:59.728422578Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:01:59.728482 containerd[1519]: time="2025-05-08T00:01:59.728444499Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:01:59.728482 containerd[1519]: time="2025-05-08T00:01:59.728457063Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 8 00:01:59.728482 containerd[1519]: time="2025-05-08T00:01:59.728471109Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:01:59.728554 containerd[1519]: time="2025-05-08T00:01:59.728483482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:01:59.728554 containerd[1519]: time="2025-05-08T00:01:59.728500304Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:01:59.728554 containerd[1519]: time="2025-05-08T00:01:59.728516424Z" level=info msg="NRI interface is disabled by configuration." May 8 00:01:59.728554 containerd[1519]: time="2025-05-08T00:01:59.728527795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:01:59.729010 containerd[1519]: time="2025-05-08T00:01:59.728942623Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:01:59.729010 containerd[1519]: time="2025-05-08T00:01:59.729009448Z" level=info msg="Connect containerd service" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.729067377Z" level=info msg="using legacy CRI server" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.729085581Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.729603372Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.731108034Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.731829687Z" level=info msg="Start subscribing containerd event" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.731961595Z" level=info msg="Start recovering state" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.732087411Z" level=info msg="Start event monitor" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.732104342Z" level=info msg="Start snapshots syncer" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.732114451Z" level=info msg="Start cni network conf syncer for default" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.732138967Z" level=info msg="Start streaming server" May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.732793335Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.732863907Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:01:59.773188 containerd[1519]: time="2025-05-08T00:01:59.734134049Z" level=info msg="containerd successfully booted in 0.643518s" May 8 00:01:59.733132 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:02:00.523584 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:02:00.639694 systemd[1]: Started sshd@0-10.0.0.29:22-10.0.0.1:37922.service - OpenSSH per-connection server daemon (10.0.0.1:37922). 
May 8 00:02:00.732530 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 37922 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:02:00.734897 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:02:00.749628 systemd-logind[1506]: New session 1 of user core. May 8 00:02:00.751371 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:02:00.816808 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:02:00.842445 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:02:00.859868 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:02:00.864645 (systemd)[1607]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:02:00.867601 systemd-logind[1506]: New session c1 of user core. May 8 00:02:00.985479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:02:00.987739 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:02:00.990827 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:02:01.097367 systemd[1607]: Queued start job for default target default.target. May 8 00:02:01.143910 systemd[1607]: Created slice app.slice - User Application Slice. May 8 00:02:01.143940 systemd[1607]: Reached target paths.target - Paths. May 8 00:02:01.143987 systemd[1607]: Reached target timers.target - Timers. May 8 00:02:01.145795 systemd[1607]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:02:01.158879 systemd[1607]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:02:01.159034 systemd[1607]: Reached target sockets.target - Sockets. May 8 00:02:01.159088 systemd[1607]: Reached target basic.target - Basic System. 
May 8 00:02:01.159152 systemd[1607]: Reached target default.target - Main User Target. May 8 00:02:01.159188 systemd[1607]: Startup finished in 222ms. May 8 00:02:01.160097 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:02:01.164101 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:02:01.169040 systemd[1]: Startup finished in 1.801s (kernel) + 7.741s (initrd) + 7.648s (userspace) = 17.191s. May 8 00:02:01.232821 systemd[1]: Started sshd@1-10.0.0.29:22-10.0.0.1:37928.service - OpenSSH per-connection server daemon (10.0.0.1:37928). May 8 00:02:01.284306 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 37928 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:02:01.306396 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:02:01.312423 systemd-logind[1506]: New session 2 of user core. May 8 00:02:01.314137 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:02:01.371499 sshd[1636]: Connection closed by 10.0.0.1 port 37928 May 8 00:02:01.372283 sshd-session[1634]: pam_unix(sshd:session): session closed for user core May 8 00:02:01.381207 systemd[1]: sshd@1-10.0.0.29:22-10.0.0.1:37928.service: Deactivated successfully. May 8 00:02:01.383395 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:02:01.385190 systemd-logind[1506]: Session 2 logged out. Waiting for processes to exit. May 8 00:02:01.392581 systemd[1]: Started sshd@2-10.0.0.29:22-10.0.0.1:37942.service - OpenSSH per-connection server daemon (10.0.0.1:37942). May 8 00:02:01.393580 systemd-logind[1506]: Removed session 2. May 8 00:02:01.429076 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 37942 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:02:01.431010 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:02:01.435699 systemd-logind[1506]: New session 3 of user core. 
May 8 00:02:01.444472 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:02:01.496860 sshd[1644]: Connection closed by 10.0.0.1 port 37942 May 8 00:02:01.497531 sshd-session[1641]: pam_unix(sshd:session): session closed for user core May 8 00:02:01.559596 systemd[1]: sshd@2-10.0.0.29:22-10.0.0.1:37942.service: Deactivated successfully. May 8 00:02:01.562437 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:02:01.564789 systemd-logind[1506]: Session 3 logged out. Waiting for processes to exit. May 8 00:02:01.571611 systemd[1]: Started sshd@3-10.0.0.29:22-10.0.0.1:37954.service - OpenSSH per-connection server daemon (10.0.0.1:37954). May 8 00:02:01.572627 systemd-logind[1506]: Removed session 3. May 8 00:02:01.614202 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 37954 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:02:01.616294 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:02:01.621862 systemd-logind[1506]: New session 4 of user core. May 8 00:02:01.631509 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:02:01.704392 sshd[1653]: Connection closed by 10.0.0.1 port 37954 May 8 00:02:01.704859 sshd-session[1649]: pam_unix(sshd:session): session closed for user core May 8 00:02:01.715028 systemd[1]: sshd@3-10.0.0.29:22-10.0.0.1:37954.service: Deactivated successfully. May 8 00:02:01.717045 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:02:01.719245 systemd-logind[1506]: Session 4 logged out. Waiting for processes to exit. May 8 00:02:01.731867 systemd[1]: Started sshd@4-10.0.0.29:22-10.0.0.1:37970.service - OpenSSH per-connection server daemon (10.0.0.1:37970). May 8 00:02:01.733218 systemd-logind[1506]: Removed session 4. 
May 8 00:02:01.767273 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 37970 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:02:01.769355 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:02:01.774044 systemd-logind[1506]: New session 5 of user core. May 8 00:02:01.776139 kubelet[1618]: E0508 00:02:01.776080 1618 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:02:01.782445 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:02:01.782698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:02:01.782882 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:02:01.783205 systemd[1]: kubelet.service: Consumed 2.104s CPU time, 245.4M memory peak. May 8 00:02:01.841991 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:02:01.842347 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:02:01.858681 sudo[1663]: pam_unix(sudo:session): session closed for user root May 8 00:02:01.860059 sshd[1662]: Connection closed by 10.0.0.1 port 37970 May 8 00:02:01.860538 sshd-session[1658]: pam_unix(sshd:session): session closed for user core May 8 00:02:01.871866 systemd[1]: sshd@4-10.0.0.29:22-10.0.0.1:37970.service: Deactivated successfully. May 8 00:02:01.873614 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:02:01.875042 systemd-logind[1506]: Session 5 logged out. Waiting for processes to exit. May 8 00:02:01.890586 systemd[1]: Started sshd@5-10.0.0.29:22-10.0.0.1:37974.service - OpenSSH per-connection server daemon (10.0.0.1:37974). 
May 8 00:02:01.891493 systemd-logind[1506]: Removed session 5. May 8 00:02:01.926633 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 37974 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:02:01.928116 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:02:01.932383 systemd-logind[1506]: New session 6 of user core. May 8 00:02:01.941465 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:02:01.997062 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:02:01.997545 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:02:02.001990 sudo[1673]: pam_unix(sudo:session): session closed for user root May 8 00:02:02.008948 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 00:02:02.009294 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:02:02.030691 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:02:02.062250 augenrules[1695]: No rules May 8 00:02:02.063209 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:02:02.063644 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:02:02.064771 sudo[1672]: pam_unix(sudo:session): session closed for user root May 8 00:02:02.066334 sshd[1671]: Connection closed by 10.0.0.1 port 37974 May 8 00:02:02.066727 sshd-session[1668]: pam_unix(sshd:session): session closed for user core May 8 00:02:02.077378 systemd[1]: sshd@5-10.0.0.29:22-10.0.0.1:37974.service: Deactivated successfully. May 8 00:02:02.079460 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:02:02.081368 systemd-logind[1506]: Session 6 logged out. Waiting for processes to exit. 
May 8 00:02:02.103688 systemd[1]: Started sshd@6-10.0.0.29:22-10.0.0.1:37982.service - OpenSSH per-connection server daemon (10.0.0.1:37982). May 8 00:02:02.104692 systemd-logind[1506]: Removed session 6. May 8 00:02:02.137511 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 37982 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:02:02.139167 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:02:02.143825 systemd-logind[1506]: New session 7 of user core. May 8 00:02:02.154465 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:02:02.210523 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:02:02.210878 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:02:02.867541 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:02:02.867710 (dockerd)[1727]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:02:03.137829 dockerd[1727]: time="2025-05-08T00:02:03.137650106Z" level=info msg="Starting up" May 8 00:02:03.888409 dockerd[1727]: time="2025-05-08T00:02:03.888354465Z" level=info msg="Loading containers: start." May 8 00:02:04.073354 kernel: Initializing XFRM netlink socket May 8 00:02:04.165999 systemd-networkd[1434]: docker0: Link UP May 8 00:02:04.204208 dockerd[1727]: time="2025-05-08T00:02:04.204146608Z" level=info msg="Loading containers: done." May 8 00:02:04.220148 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck810699561-merged.mount: Deactivated successfully. 
May 8 00:02:04.221002 dockerd[1727]: time="2025-05-08T00:02:04.220954048Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:02:04.221104 dockerd[1727]: time="2025-05-08T00:02:04.221082629Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 8 00:02:04.221255 dockerd[1727]: time="2025-05-08T00:02:04.221229134Z" level=info msg="Daemon has completed initialization" May 8 00:02:04.263911 dockerd[1727]: time="2025-05-08T00:02:04.263826018Z" level=info msg="API listen on /run/docker.sock" May 8 00:02:04.264086 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:02:05.086099 containerd[1519]: time="2025-05-08T00:02:05.086048389Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 00:02:06.000204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount180635824.mount: Deactivated successfully. 
May 8 00:02:08.024563 containerd[1519]: time="2025-05-08T00:02:08.024473324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:08.029920 containerd[1519]: time="2025-05-08T00:02:08.029851951Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 8 00:02:08.037679 containerd[1519]: time="2025-05-08T00:02:08.037634075Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:08.046783 containerd[1519]: time="2025-05-08T00:02:08.046733109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:08.047917 containerd[1519]: time="2025-05-08T00:02:08.047878067Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.961780105s" May 8 00:02:08.047994 containerd[1519]: time="2025-05-08T00:02:08.047933350Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 8 00:02:08.102620 containerd[1519]: time="2025-05-08T00:02:08.102579040Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 00:02:10.696831 containerd[1519]: time="2025-05-08T00:02:10.696751610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:10.702508 containerd[1519]: time="2025-05-08T00:02:10.702459514Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 8 00:02:10.708895 containerd[1519]: time="2025-05-08T00:02:10.708842655Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:10.719460 containerd[1519]: time="2025-05-08T00:02:10.719415122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:10.720797 containerd[1519]: time="2025-05-08T00:02:10.720734947Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.618108117s" May 8 00:02:10.720797 containerd[1519]: time="2025-05-08T00:02:10.720782917Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 8 00:02:10.748583 containerd[1519]: time="2025-05-08T00:02:10.748518050Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 00:02:11.940670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:02:12.000698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:02:12.278608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:02:12.283140 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:02:12.347959 kubelet[2011]: E0508 00:02:12.347813 2011 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:02:12.355580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:02:12.355840 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:02:12.356268 systemd[1]: kubelet.service: Consumed 341ms CPU time, 98.3M memory peak. May 8 00:02:13.030979 containerd[1519]: time="2025-05-08T00:02:13.030914879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:13.032223 containerd[1519]: time="2025-05-08T00:02:13.032183759Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 8 00:02:13.033724 containerd[1519]: time="2025-05-08T00:02:13.033695153Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:13.036941 containerd[1519]: time="2025-05-08T00:02:13.036872472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:13.037954 containerd[1519]: time="2025-05-08T00:02:13.037921169Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id 
\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 2.289343095s" May 8 00:02:13.037954 containerd[1519]: time="2025-05-08T00:02:13.037951766Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 8 00:02:13.063380 containerd[1519]: time="2025-05-08T00:02:13.063334999Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:02:15.037241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090842003.mount: Deactivated successfully. May 8 00:02:16.228725 containerd[1519]: time="2025-05-08T00:02:16.228643842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:16.229426 containerd[1519]: time="2025-05-08T00:02:16.229343144Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 8 00:02:16.230938 containerd[1519]: time="2025-05-08T00:02:16.230910403Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:16.233702 containerd[1519]: time="2025-05-08T00:02:16.233665730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:16.234432 containerd[1519]: time="2025-05-08T00:02:16.234399526Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag 
\"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 3.171027338s" May 8 00:02:16.234476 containerd[1519]: time="2025-05-08T00:02:16.234435003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 8 00:02:16.260420 containerd[1519]: time="2025-05-08T00:02:16.260366163Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:02:17.179691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475405768.mount: Deactivated successfully. May 8 00:02:19.659967 containerd[1519]: time="2025-05-08T00:02:19.659886417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:19.660686 containerd[1519]: time="2025-05-08T00:02:19.660632356Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 00:02:19.662207 containerd[1519]: time="2025-05-08T00:02:19.662170230Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:19.665761 containerd[1519]: time="2025-05-08T00:02:19.665694479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:19.667087 containerd[1519]: time="2025-05-08T00:02:19.667048519Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.406630659s" May 8 00:02:19.667140 containerd[1519]: time="2025-05-08T00:02:19.667087492Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:02:19.697416 containerd[1519]: time="2025-05-08T00:02:19.697361997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:02:21.345306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2252743888.mount: Deactivated successfully. May 8 00:02:21.352529 containerd[1519]: time="2025-05-08T00:02:21.352454887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:21.353228 containerd[1519]: time="2025-05-08T00:02:21.353167974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 8 00:02:21.354469 containerd[1519]: time="2025-05-08T00:02:21.354431083Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:21.357421 containerd[1519]: time="2025-05-08T00:02:21.357378070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:21.358302 containerd[1519]: time="2025-05-08T00:02:21.358242351Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.660841841s" May 8 
00:02:21.358302 containerd[1519]: time="2025-05-08T00:02:21.358293647Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 8 00:02:21.386674 containerd[1519]: time="2025-05-08T00:02:21.386621822Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:02:22.002424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758207019.mount: Deactivated successfully. May 8 00:02:22.440394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:02:22.456660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:02:22.856217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:02:22.862630 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:02:22.994256 kubelet[2155]: E0508 00:02:22.994171 2155 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:02:22.999468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:02:22.999732 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:02:23.000179 systemd[1]: kubelet.service: Consumed 215ms CPU time, 98.9M memory peak. 
May 8 00:02:24.151577 containerd[1519]: time="2025-05-08T00:02:24.151494986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:24.152401 containerd[1519]: time="2025-05-08T00:02:24.152304064Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 8 00:02:24.153730 containerd[1519]: time="2025-05-08T00:02:24.153674003Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:24.159972 containerd[1519]: time="2025-05-08T00:02:24.157800031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:02:24.160955 containerd[1519]: time="2025-05-08T00:02:24.160898031Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.774234891s" May 8 00:02:24.160955 containerd[1519]: time="2025-05-08T00:02:24.160947864Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 8 00:02:26.935552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:02:26.935717 systemd[1]: kubelet.service: Consumed 215ms CPU time, 98.9M memory peak. May 8 00:02:26.954733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:02:26.974978 systemd[1]: Reload requested from client PID 2249 ('systemctl') (unit session-7.scope)... 
May 8 00:02:26.975000 systemd[1]: Reloading... May 8 00:02:27.082629 zram_generator::config[2293]: No configuration found. May 8 00:02:27.300188 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:02:27.407009 systemd[1]: Reloading finished in 431 ms. May 8 00:02:27.463915 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:02:27.466835 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:02:27.467154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:02:27.467201 systemd[1]: kubelet.service: Consumed 159ms CPU time, 83.6M memory peak. May 8 00:02:27.469111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:02:27.624418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:02:27.629428 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:02:27.946821 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:02:27.946821 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:02:27.946821 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:02:27.947245 kubelet[2343]: I0508 00:02:27.946875 2343 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:02:28.347481 kubelet[2343]: I0508 00:02:28.347302 2343 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:02:28.347481 kubelet[2343]: I0508 00:02:28.347369 2343 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:02:28.347661 kubelet[2343]: I0508 00:02:28.347645 2343 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:02:28.363565 kubelet[2343]: I0508 00:02:28.363501 2343 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:02:28.364208 kubelet[2343]: E0508 00:02:28.364158 2343 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:28.378825 kubelet[2343]: I0508 00:02:28.378364 2343 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:02:28.379956 kubelet[2343]: I0508 00:02:28.379861 2343 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:02:28.380598 kubelet[2343]: I0508 00:02:28.380022 2343 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:02:28.381197 kubelet[2343]: I0508 00:02:28.381164 2343 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:02:28.381197 
kubelet[2343]: I0508 00:02:28.381192 2343 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:02:28.382048 kubelet[2343]: I0508 00:02:28.382023 2343 state_mem.go:36] "Initialized new in-memory state store" May 8 00:02:28.382761 kubelet[2343]: I0508 00:02:28.382734 2343 kubelet.go:400] "Attempting to sync node with API server" May 8 00:02:28.382761 kubelet[2343]: I0508 00:02:28.382761 2343 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:02:28.382842 kubelet[2343]: I0508 00:02:28.382805 2343 kubelet.go:312] "Adding apiserver pod source" May 8 00:02:28.382884 kubelet[2343]: I0508 00:02:28.382844 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:02:28.385561 kubelet[2343]: W0508 00:02:28.385460 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:28.385561 kubelet[2343]: E0508 00:02:28.385529 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:28.386789 kubelet[2343]: W0508 00:02:28.386739 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:28.386789 kubelet[2343]: E0508 00:02:28.386783 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:28.388550 
kubelet[2343]: I0508 00:02:28.388516 2343 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:02:28.390473 kubelet[2343]: I0508 00:02:28.390441 2343 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:02:28.390582 kubelet[2343]: W0508 00:02:28.390545 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:02:28.391515 kubelet[2343]: I0508 00:02:28.391475 2343 server.go:1264] "Started kubelet" May 8 00:02:28.392313 kubelet[2343]: I0508 00:02:28.392044 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:02:28.392313 kubelet[2343]: I0508 00:02:28.392572 2343 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:02:28.392313 kubelet[2343]: I0508 00:02:28.392625 2343 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:02:28.393211 kubelet[2343]: I0508 00:02:28.393177 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:02:28.393918 kubelet[2343]: I0508 00:02:28.393885 2343 server.go:455] "Adding debug handlers to kubelet server" May 8 00:02:28.400101 kubelet[2343]: E0508 00:02:28.399042 2343 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:02:28.400101 kubelet[2343]: I0508 00:02:28.399114 2343 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:02:28.400101 kubelet[2343]: I0508 00:02:28.399229 2343 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:02:28.400101 kubelet[2343]: I0508 00:02:28.399310 2343 reconciler.go:26] "Reconciler: start to sync state" May 8 00:02:28.400101 kubelet[2343]: W0508 00:02:28.399755 2343 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:28.400101 kubelet[2343]: E0508 00:02:28.399807 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:28.400101 kubelet[2343]: E0508 00:02:28.400044 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="200ms" May 8 00:02:28.401291 kubelet[2343]: E0508 00:02:28.401132 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.29:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.29:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d64556359a98e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:02:28.391438734 +0000 UTC m=+0.757178972,LastTimestamp:2025-05-08 00:02:28.391438734 +0000 UTC m=+0.757178972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:02:28.401582 kubelet[2343]: E0508 00:02:28.401540 2343 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:02:28.401814 kubelet[2343]: I0508 00:02:28.401790 2343 factory.go:221] Registration of the systemd container factory successfully May 8 00:02:28.402090 kubelet[2343]: I0508 00:02:28.401898 2343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:02:28.403578 kubelet[2343]: I0508 00:02:28.403547 2343 factory.go:221] Registration of the containerd container factory successfully May 8 00:02:28.414193 kubelet[2343]: I0508 00:02:28.414126 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:02:28.415746 kubelet[2343]: I0508 00:02:28.415694 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:02:28.415746 kubelet[2343]: I0508 00:02:28.415742 2343 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:02:28.415876 kubelet[2343]: I0508 00:02:28.415767 2343 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:02:28.415876 kubelet[2343]: E0508 00:02:28.415805 2343 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:02:28.417889 kubelet[2343]: W0508 00:02:28.417807 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:28.417889 kubelet[2343]: E0508 00:02:28.417884 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: 
connect: connection refused May 8 00:02:28.418331 kubelet[2343]: I0508 00:02:28.418274 2343 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:02:28.418331 kubelet[2343]: I0508 00:02:28.418291 2343 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:02:28.418604 kubelet[2343]: I0508 00:02:28.418474 2343 state_mem.go:36] "Initialized new in-memory state store" May 8 00:02:28.501135 kubelet[2343]: I0508 00:02:28.501093 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:02:28.501591 kubelet[2343]: E0508 00:02:28.501554 2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="localhost" May 8 00:02:28.516891 kubelet[2343]: E0508 00:02:28.516813 2343 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:02:28.600831 kubelet[2343]: E0508 00:02:28.600647 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="400ms" May 8 00:02:28.644193 kubelet[2343]: I0508 00:02:28.644148 2343 policy_none.go:49] "None policy: Start" May 8 00:02:28.644966 kubelet[2343]: I0508 00:02:28.644937 2343 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:02:28.644966 kubelet[2343]: I0508 00:02:28.644967 2343 state_mem.go:35] "Initializing new in-memory state store" May 8 00:02:28.651367 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:02:28.670775 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:02:28.676455 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 8 00:02:28.691358 kubelet[2343]: I0508 00:02:28.691298 2343 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:02:28.691637 kubelet[2343]: I0508 00:02:28.691587 2343 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:02:28.692410 kubelet[2343]: I0508 00:02:28.691754 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:02:28.692962 kubelet[2343]: E0508 00:02:28.692923 2343 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:02:28.703041 kubelet[2343]: I0508 00:02:28.703014 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:02:28.703392 kubelet[2343]: E0508 00:02:28.703362 2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="localhost" May 8 00:02:28.717525 kubelet[2343]: I0508 00:02:28.717484 2343 topology_manager.go:215] "Topology Admit Handler" podUID="fc6a1dea93b2cd925a2610910e1e951e" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:02:28.718373 kubelet[2343]: I0508 00:02:28.718352 2343 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:02:28.719093 kubelet[2343]: I0508 00:02:28.719067 2343 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:02:28.724924 systemd[1]: Created slice kubepods-burstable-podfc6a1dea93b2cd925a2610910e1e951e.slice - libcontainer container kubepods-burstable-podfc6a1dea93b2cd925a2610910e1e951e.slice. 
May 8 00:02:28.747615 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 8 00:02:28.751712 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 8 00:02:28.801653 kubelet[2343]: I0508 00:02:28.801611 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:02:28.801653 kubelet[2343]: I0508 00:02:28.801648 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc6a1dea93b2cd925a2610910e1e951e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc6a1dea93b2cd925a2610910e1e951e\") " pod="kube-system/kube-apiserver-localhost" May 8 00:02:28.801825 kubelet[2343]: I0508 00:02:28.801676 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc6a1dea93b2cd925a2610910e1e951e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fc6a1dea93b2cd925a2610910e1e951e\") " pod="kube-system/kube-apiserver-localhost" May 8 00:02:28.801825 kubelet[2343]: I0508 00:02:28.801697 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:02:28.801825 
kubelet[2343]: I0508 00:02:28.801719 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:02:28.801825 kubelet[2343]: I0508 00:02:28.801738 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc6a1dea93b2cd925a2610910e1e951e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc6a1dea93b2cd925a2610910e1e951e\") " pod="kube-system/kube-apiserver-localhost" May 8 00:02:28.801825 kubelet[2343]: I0508 00:02:28.801757 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:02:28.801957 kubelet[2343]: I0508 00:02:28.801778 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:02:28.801957 kubelet[2343]: I0508 00:02:28.801798 2343 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 
00:02:29.002186 kubelet[2343]: E0508 00:02:29.002121 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="800ms" May 8 00:02:29.045405 containerd[1519]: time="2025-05-08T00:02:29.045349059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fc6a1dea93b2cd925a2610910e1e951e,Namespace:kube-system,Attempt:0,}" May 8 00:02:29.050746 containerd[1519]: time="2025-05-08T00:02:29.050717359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 00:02:29.054308 containerd[1519]: time="2025-05-08T00:02:29.054270847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 00:02:29.104730 kubelet[2343]: I0508 00:02:29.104708 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:02:29.105025 kubelet[2343]: E0508 00:02:29.104981 2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="localhost" May 8 00:02:29.288505 kubelet[2343]: W0508 00:02:29.288184 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:29.288505 kubelet[2343]: E0508 00:02:29.288276 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial 
tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:29.406493 kubelet[2343]: W0508 00:02:29.406412 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:29.406493 kubelet[2343]: E0508 00:02:29.406487 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:29.419250 kubelet[2343]: W0508 00:02:29.419196 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:29.419250 kubelet[2343]: E0508 00:02:29.419245 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:29.792143 kubelet[2343]: W0508 00:02:29.792038 2343 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:29.792143 kubelet[2343]: E0508 00:02:29.792139 2343 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:29.802715 kubelet[2343]: E0508 00:02:29.802648 2343 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="1.6s" May 8 00:02:29.907896 kubelet[2343]: I0508 00:02:29.907812 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:02:29.908687 kubelet[2343]: E0508 00:02:29.908615 2343 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="localhost" May 8 00:02:30.009816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629615078.mount: Deactivated successfully. May 8 00:02:30.017817 containerd[1519]: time="2025-05-08T00:02:30.017743215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:02:30.020587 containerd[1519]: time="2025-05-08T00:02:30.020481770Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:02:30.021601 containerd[1519]: time="2025-05-08T00:02:30.021558726Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:02:30.024253 containerd[1519]: time="2025-05-08T00:02:30.024196457Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:02:30.039828 containerd[1519]: time="2025-05-08T00:02:30.039769338Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:02:30.041776 
containerd[1519]: time="2025-05-08T00:02:30.041687263Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:02:30.043531 containerd[1519]: time="2025-05-08T00:02:30.043345508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:02:30.045344 containerd[1519]: time="2025-05-08T00:02:30.045286487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:02:30.050084 containerd[1519]: time="2025-05-08T00:02:30.050027350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 995.642073ms" May 8 00:02:30.050863 containerd[1519]: time="2025-05-08T00:02:30.050812784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.000033555s" May 8 00:02:30.051914 containerd[1519]: time="2025-05-08T00:02:30.051844080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.006358558s" May 8 
00:02:30.398516 containerd[1519]: time="2025-05-08T00:02:30.398007924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:02:30.398516 containerd[1519]: time="2025-05-08T00:02:30.398096115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:02:30.398667 containerd[1519]: time="2025-05-08T00:02:30.398130320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:30.398667 containerd[1519]: time="2025-05-08T00:02:30.398261393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:30.399802 containerd[1519]: time="2025-05-08T00:02:30.399255728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:02:30.399802 containerd[1519]: time="2025-05-08T00:02:30.399562339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:02:30.399802 containerd[1519]: time="2025-05-08T00:02:30.399577829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:30.399934 containerd[1519]: time="2025-05-08T00:02:30.399882856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:30.400169 containerd[1519]: time="2025-05-08T00:02:30.400089575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:02:30.400169 containerd[1519]: time="2025-05-08T00:02:30.400150382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:02:30.400271 containerd[1519]: time="2025-05-08T00:02:30.400166483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:30.400674 containerd[1519]: time="2025-05-08T00:02:30.400260344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:02:30.471148 kubelet[2343]: E0508 00:02:30.471112 2343 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.29:6443: connect: connection refused May 8 00:02:30.475523 systemd[1]: Started cri-containerd-144ed34042e4ca6a52a6e09de831a0166d847f67010b11cb7e540ec2ba17b869.scope - libcontainer container 144ed34042e4ca6a52a6e09de831a0166d847f67010b11cb7e540ec2ba17b869. May 8 00:02:30.477696 systemd[1]: Started cri-containerd-dffae9b5ca2d561c5b7ca2086a5e9c91dd062dfcb8325fdbb7ef93539e50a584.scope - libcontainer container dffae9b5ca2d561c5b7ca2086a5e9c91dd062dfcb8325fdbb7ef93539e50a584. May 8 00:02:30.481984 systemd[1]: Started cri-containerd-8c922ad24db6edb162d23fb839e0ce2225bf2ee43ff059763d6cd86496419384.scope - libcontainer container 8c922ad24db6edb162d23fb839e0ce2225bf2ee43ff059763d6cd86496419384. 
May 8 00:02:30.540253 containerd[1519]: time="2025-05-08T00:02:30.539519757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"144ed34042e4ca6a52a6e09de831a0166d847f67010b11cb7e540ec2ba17b869\"" May 8 00:02:30.541067 containerd[1519]: time="2025-05-08T00:02:30.540908512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fc6a1dea93b2cd925a2610910e1e951e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dffae9b5ca2d561c5b7ca2086a5e9c91dd062dfcb8325fdbb7ef93539e50a584\"" May 8 00:02:30.545019 containerd[1519]: time="2025-05-08T00:02:30.544988793Z" level=info msg="CreateContainer within sandbox \"dffae9b5ca2d561c5b7ca2086a5e9c91dd062dfcb8325fdbb7ef93539e50a584\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:02:30.545225 containerd[1519]: time="2025-05-08T00:02:30.545196704Z" level=info msg="CreateContainer within sandbox \"144ed34042e4ca6a52a6e09de831a0166d847f67010b11cb7e540ec2ba17b869\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:02:30.547049 containerd[1519]: time="2025-05-08T00:02:30.547011179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c922ad24db6edb162d23fb839e0ce2225bf2ee43ff059763d6cd86496419384\"" May 8 00:02:30.550294 containerd[1519]: time="2025-05-08T00:02:30.550247895Z" level=info msg="CreateContainer within sandbox \"8c922ad24db6edb162d23fb839e0ce2225bf2ee43ff059763d6cd86496419384\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:02:30.568410 containerd[1519]: time="2025-05-08T00:02:30.568174690Z" level=info msg="CreateContainer within sandbox \"dffae9b5ca2d561c5b7ca2086a5e9c91dd062dfcb8325fdbb7ef93539e50a584\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0be1a42691d80f5a6bf9a74f5300997abdadcc818fe16e617a8563e69c67098c\"" May 8 00:02:30.568893 containerd[1519]: time="2025-05-08T00:02:30.568870571Z" level=info msg="StartContainer for \"0be1a42691d80f5a6bf9a74f5300997abdadcc818fe16e617a8563e69c67098c\"" May 8 00:02:30.572552 containerd[1519]: time="2025-05-08T00:02:30.572507678Z" level=info msg="CreateContainer within sandbox \"144ed34042e4ca6a52a6e09de831a0166d847f67010b11cb7e540ec2ba17b869\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"16663d7e6756abea3718a18bc1deca6cf28489594f4918af1cda3060ce47cd83\"" May 8 00:02:30.573101 containerd[1519]: time="2025-05-08T00:02:30.573068068Z" level=info msg="StartContainer for \"16663d7e6756abea3718a18bc1deca6cf28489594f4918af1cda3060ce47cd83\"" May 8 00:02:30.579781 containerd[1519]: time="2025-05-08T00:02:30.579638937Z" level=info msg="CreateContainer within sandbox \"8c922ad24db6edb162d23fb839e0ce2225bf2ee43ff059763d6cd86496419384\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"048f7212414625b38a88e6005b5f9a321a32490720f0982c5c0060257531c2a2\"" May 8 00:02:30.581397 containerd[1519]: time="2025-05-08T00:02:30.580585850Z" level=info msg="StartContainer for \"048f7212414625b38a88e6005b5f9a321a32490720f0982c5c0060257531c2a2\"" May 8 00:02:30.620691 systemd[1]: Started cri-containerd-0be1a42691d80f5a6bf9a74f5300997abdadcc818fe16e617a8563e69c67098c.scope - libcontainer container 0be1a42691d80f5a6bf9a74f5300997abdadcc818fe16e617a8563e69c67098c. May 8 00:02:30.624199 systemd[1]: Started cri-containerd-048f7212414625b38a88e6005b5f9a321a32490720f0982c5c0060257531c2a2.scope - libcontainer container 048f7212414625b38a88e6005b5f9a321a32490720f0982c5c0060257531c2a2. 
May 8 00:02:30.629217 systemd[1]: Started cri-containerd-16663d7e6756abea3718a18bc1deca6cf28489594f4918af1cda3060ce47cd83.scope - libcontainer container 16663d7e6756abea3718a18bc1deca6cf28489594f4918af1cda3060ce47cd83. May 8 00:02:30.675661 containerd[1519]: time="2025-05-08T00:02:30.675525131Z" level=info msg="StartContainer for \"0be1a42691d80f5a6bf9a74f5300997abdadcc818fe16e617a8563e69c67098c\" returns successfully" May 8 00:02:30.675903 containerd[1519]: time="2025-05-08T00:02:30.675605626Z" level=info msg="StartContainer for \"048f7212414625b38a88e6005b5f9a321a32490720f0982c5c0060257531c2a2\" returns successfully" May 8 00:02:30.687144 containerd[1519]: time="2025-05-08T00:02:30.687087917Z" level=info msg="StartContainer for \"16663d7e6756abea3718a18bc1deca6cf28489594f4918af1cda3060ce47cd83\" returns successfully" May 8 00:02:31.514809 kubelet[2343]: I0508 00:02:31.514760 2343 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:02:31.979826 kubelet[2343]: E0508 00:02:31.979772 2343 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:02:32.084710 kubelet[2343]: I0508 00:02:32.084487 2343 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:02:32.125686 kubelet[2343]: E0508 00:02:32.125304 2343 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:02:32.385112 kubelet[2343]: I0508 00:02:32.384914 2343 apiserver.go:52] "Watching apiserver" May 8 00:02:32.399631 kubelet[2343]: I0508 00:02:32.399583 2343 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:02:32.442486 kubelet[2343]: E0508 00:02:32.442423 2343 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" May 8 00:02:34.071603 systemd[1]: Reload requested from client PID 2613 ('systemctl') (unit session-7.scope)... May 8 00:02:34.071625 systemd[1]: Reloading... May 8 00:02:34.165358 zram_generator::config[2658]: No configuration found. May 8 00:02:34.286044 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:02:34.406947 systemd[1]: Reloading finished in 334 ms. May 8 00:02:34.431234 kubelet[2343]: I0508 00:02:34.431051 2343 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:02:34.431209 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:02:34.448933 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:02:34.449291 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:02:34.449381 systemd[1]: kubelet.service: Consumed 1.286s CPU time, 116.3M memory peak. May 8 00:02:34.458624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:02:34.637235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:02:34.643557 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:02:34.698155 kubelet[2702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:02:34.698155 kubelet[2702]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 8 00:02:34.698155 kubelet[2702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:02:34.698643 kubelet[2702]: I0508 00:02:34.698214 2702 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:02:34.703426 kubelet[2702]: I0508 00:02:34.703382 2702 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:02:34.703426 kubelet[2702]: I0508 00:02:34.703413 2702 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:02:34.703757 kubelet[2702]: I0508 00:02:34.703731 2702 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:02:34.705188 kubelet[2702]: I0508 00:02:34.705162 2702 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:02:34.706417 kubelet[2702]: I0508 00:02:34.706367 2702 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:02:34.718563 kubelet[2702]: I0508 00:02:34.718511 2702 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:02:34.718900 kubelet[2702]: I0508 00:02:34.718839 2702 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:02:34.719104 kubelet[2702]: I0508 00:02:34.718892 2702 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:02:34.719204 kubelet[2702]: I0508 00:02:34.719118 2702 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:02:34.719204 
kubelet[2702]: I0508 00:02:34.719130 2702 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:02:34.719204 kubelet[2702]: I0508 00:02:34.719180 2702 state_mem.go:36] "Initialized new in-memory state store" May 8 00:02:34.719328 kubelet[2702]: I0508 00:02:34.719301 2702 kubelet.go:400] "Attempting to sync node with API server" May 8 00:02:34.719361 kubelet[2702]: I0508 00:02:34.719336 2702 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:02:34.719391 kubelet[2702]: I0508 00:02:34.719362 2702 kubelet.go:312] "Adding apiserver pod source" May 8 00:02:34.719391 kubelet[2702]: I0508 00:02:34.719380 2702 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:02:34.723163 kubelet[2702]: I0508 00:02:34.723125 2702 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:02:34.723533 kubelet[2702]: I0508 00:02:34.723504 2702 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:02:34.724006 kubelet[2702]: I0508 00:02:34.723981 2702 server.go:1264] "Started kubelet" May 8 00:02:34.725909 kubelet[2702]: I0508 00:02:34.725588 2702 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:02:34.725909 kubelet[2702]: I0508 00:02:34.725683 2702 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:02:34.726098 kubelet[2702]: I0508 00:02:34.725933 2702 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:02:34.726098 kubelet[2702]: I0508 00:02:34.725977 2702 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:02:34.727719 kubelet[2702]: I0508 00:02:34.726800 2702 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:02:34.727719 kubelet[2702]: I0508 00:02:34.726935 2702 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:02:34.727719 kubelet[2702]: I0508 00:02:34.727227 2702 reconciler.go:26] "Reconciler: start to sync state" May 8 00:02:34.730566 kubelet[2702]: I0508 00:02:34.730537 2702 server.go:455] "Adding debug handlers to kubelet server" May 8 00:02:34.730626 kubelet[2702]: I0508 00:02:34.730612 2702 factory.go:221] Registration of the systemd container factory successfully May 8 00:02:34.730740 kubelet[2702]: I0508 00:02:34.730711 2702 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:02:34.732358 kubelet[2702]: I0508 00:02:34.732286 2702 factory.go:221] Registration of the containerd container factory successfully May 8 00:02:34.737920 kubelet[2702]: E0508 00:02:34.737878 2702 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:02:34.739960 kubelet[2702]: I0508 00:02:34.739926 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:02:34.741296 kubelet[2702]: I0508 00:02:34.741277 2702 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:02:34.741413 kubelet[2702]: I0508 00:02:34.741401 2702 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:02:34.741492 kubelet[2702]: I0508 00:02:34.741482 2702 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:02:34.741596 kubelet[2702]: E0508 00:02:34.741573 2702 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:02:34.766518 kubelet[2702]: I0508 00:02:34.766449 2702 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:02:34.766518 kubelet[2702]: I0508 00:02:34.766474 2702 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:02:34.766518 kubelet[2702]: I0508 00:02:34.766494 2702 state_mem.go:36] "Initialized new in-memory state store" May 8 00:02:34.766710 kubelet[2702]: I0508 00:02:34.766647 2702 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:02:34.766710 kubelet[2702]: I0508 00:02:34.766660 2702 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:02:34.766710 kubelet[2702]: I0508 00:02:34.766688 2702 policy_none.go:49] "None policy: Start" May 8 00:02:34.767174 kubelet[2702]: I0508 00:02:34.767155 2702 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:02:34.767219 kubelet[2702]: I0508 00:02:34.767178 2702 state_mem.go:35] "Initializing new in-memory state store" May 8 00:02:34.767305 kubelet[2702]: I0508 00:02:34.767291 2702 state_mem.go:75] "Updated machine memory state" May 8 00:02:34.773857 kubelet[2702]: I0508 00:02:34.773806 2702 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:02:34.774115 kubelet[2702]: I0508 00:02:34.774050 2702 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:02:34.774380 kubelet[2702]: I0508 00:02:34.774265 2702 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:02:34.829391 kubelet[2702]: I0508 00:02:34.829315 2702 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:02:34.841891 kubelet[2702]: I0508 00:02:34.841802 2702 topology_manager.go:215] "Topology Admit Handler" podUID="fc6a1dea93b2cd925a2610910e1e951e" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:02:34.842157 kubelet[2702]: I0508 00:02:34.841965 2702 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:02:34.842157 kubelet[2702]: I0508 00:02:34.842078 2702 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:02:34.855906 kubelet[2702]: I0508 00:02:34.855844 2702 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 00:02:34.856061 kubelet[2702]: I0508 00:02:34.855948 2702 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:02:34.928838 kubelet[2702]: I0508 00:02:34.928762 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:02:34.928838 kubelet[2702]: I0508 00:02:34.928829 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc6a1dea93b2cd925a2610910e1e951e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc6a1dea93b2cd925a2610910e1e951e\") " pod="kube-system/kube-apiserver-localhost" May 8 00:02:34.929063 kubelet[2702]: I0508 00:02:34.928862 2702 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:02:34.929063 kubelet[2702]: I0508 00:02:34.928904 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:02:34.929063 kubelet[2702]: I0508 00:02:34.928941 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:02:34.929063 kubelet[2702]: I0508 00:02:34.928967 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:02:34.929063 kubelet[2702]: I0508 00:02:34.929032 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:02:34.929218 kubelet[2702]: I0508 
00:02:34.929071 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc6a1dea93b2cd925a2610910e1e951e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc6a1dea93b2cd925a2610910e1e951e\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:02:34.929218 kubelet[2702]: I0508 00:02:34.929096 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc6a1dea93b2cd925a2610910e1e951e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fc6a1dea93b2cd925a2610910e1e951e\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:02:35.072924 sudo[2737]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 8 00:02:35.073402 sudo[2737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 8 00:02:35.577454 sudo[2737]: pam_unix(sudo:session): session closed for user root
May 8 00:02:35.721006 kubelet[2702]: I0508 00:02:35.720936 2702 apiserver.go:52] "Watching apiserver"
May 8 00:02:35.727784 kubelet[2702]: I0508 00:02:35.727735 2702 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 00:02:36.287483 kubelet[2702]: I0508 00:02:36.287293 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.287265454 podStartE2EDuration="2.287265454s" podCreationTimestamp="2025-05-08 00:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:02:36.2800283 +0000 UTC m=+1.631621237" watchObservedRunningTime="2025-05-08 00:02:36.287265454 +0000 UTC m=+1.638858391"
May 8 00:02:36.299026 kubelet[2702]: I0508 00:02:36.298944 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.29891726 podStartE2EDuration="2.29891726s" podCreationTimestamp="2025-05-08 00:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:02:36.289308366 +0000 UTC m=+1.640901323" watchObservedRunningTime="2025-05-08 00:02:36.29891726 +0000 UTC m=+1.650510197"
May 8 00:02:36.308205 kubelet[2702]: I0508 00:02:36.308109 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.308085792 podStartE2EDuration="2.308085792s" podCreationTimestamp="2025-05-08 00:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:02:36.299367199 +0000 UTC m=+1.650960126" watchObservedRunningTime="2025-05-08 00:02:36.308085792 +0000 UTC m=+1.659678729"
May 8 00:02:37.216930 sudo[1707]: pam_unix(sudo:session): session closed for user root
May 8 00:02:37.218724 sshd[1706]: Connection closed by 10.0.0.1 port 37982
May 8 00:02:37.219343 sshd-session[1703]: pam_unix(sshd:session): session closed for user core
May 8 00:02:37.224526 systemd[1]: sshd@6-10.0.0.29:22-10.0.0.1:37982.service: Deactivated successfully.
May 8 00:02:37.227036 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:02:37.227279 systemd[1]: session-7.scope: Consumed 5.837s CPU time, 280.2M memory peak.
May 8 00:02:37.228622 systemd-logind[1506]: Session 7 logged out. Waiting for processes to exit.
May 8 00:02:37.229588 systemd-logind[1506]: Removed session 7.
May 8 00:02:42.984239 update_engine[1507]: I20250508 00:02:42.984122 1507 update_attempter.cc:509] Updating boot flags...
May 8 00:02:43.017361 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2786)
May 8 00:02:43.059519 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2785)
May 8 00:02:43.102419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2785)
May 8 00:02:48.198840 kubelet[2702]: I0508 00:02:48.198763 2702 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 00:02:48.199424 kubelet[2702]: I0508 00:02:48.199353 2702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 00:02:48.199471 containerd[1519]: time="2025-05-08T00:02:48.199140907Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 8 00:02:48.978492 kubelet[2702]: I0508 00:02:48.978423 2702 topology_manager.go:215] "Topology Admit Handler" podUID="abf6f3f8-407b-4995-9062-064d854c8d13" podNamespace="kube-system" podName="cilium-fx48x"
May 8 00:02:48.983259 kubelet[2702]: I0508 00:02:48.983200 2702 topology_manager.go:215] "Topology Admit Handler" podUID="79e8184e-6e80-438e-a700-514717eb122c" podNamespace="kube-system" podName="kube-proxy-79j9n"
May 8 00:02:48.989850 systemd[1]: Created slice kubepods-burstable-podabf6f3f8_407b_4995_9062_064d854c8d13.slice - libcontainer container kubepods-burstable-podabf6f3f8_407b_4995_9062_064d854c8d13.slice.
May 8 00:02:48.995967 systemd[1]: Created slice kubepods-besteffort-pod79e8184e_6e80_438e_a700_514717eb122c.slice - libcontainer container kubepods-besteffort-pod79e8184e_6e80_438e_a700_514717eb122c.slice.
May 8 00:02:49.012441 kubelet[2702]: I0508 00:02:49.012351 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e8184e-6e80-438e-a700-514717eb122c-xtables-lock\") pod \"kube-proxy-79j9n\" (UID: \"79e8184e-6e80-438e-a700-514717eb122c\") " pod="kube-system/kube-proxy-79j9n"
May 8 00:02:49.012441 kubelet[2702]: I0508 00:02:49.012411 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-bpf-maps\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012619 kubelet[2702]: I0508 00:02:49.012433 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-hostproc\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012619 kubelet[2702]: I0508 00:02:49.012480 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-cgroup\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012619 kubelet[2702]: I0508 00:02:49.012499 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-config-path\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012619 kubelet[2702]: I0508 00:02:49.012520 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67mxr\" (UniqueName: \"kubernetes.io/projected/abf6f3f8-407b-4995-9062-064d854c8d13-kube-api-access-67mxr\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012619 kubelet[2702]: I0508 00:02:49.012542 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-lib-modules\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012619 kubelet[2702]: I0508 00:02:49.012561 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-host-proc-sys-net\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012768 kubelet[2702]: I0508 00:02:49.012667 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zgh\" (UniqueName: \"kubernetes.io/projected/79e8184e-6e80-438e-a700-514717eb122c-kube-api-access-56zgh\") pod \"kube-proxy-79j9n\" (UID: \"79e8184e-6e80-438e-a700-514717eb122c\") " pod="kube-system/kube-proxy-79j9n"
May 8 00:02:49.012768 kubelet[2702]: I0508 00:02:49.012761 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cni-path\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012823 kubelet[2702]: I0508 00:02:49.012780 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-etc-cni-netd\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012823 kubelet[2702]: I0508 00:02:49.012801 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-xtables-lock\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012823 kubelet[2702]: I0508 00:02:49.012816 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-host-proc-sys-kernel\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012894 kubelet[2702]: I0508 00:02:49.012830 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-run\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012894 kubelet[2702]: I0508 00:02:49.012857 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e8184e-6e80-438e-a700-514717eb122c-lib-modules\") pod \"kube-proxy-79j9n\" (UID: \"79e8184e-6e80-438e-a700-514717eb122c\") " pod="kube-system/kube-proxy-79j9n"
May 8 00:02:49.012894 kubelet[2702]: I0508 00:02:49.012885 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abf6f3f8-407b-4995-9062-064d854c8d13-hubble-tls\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012965 kubelet[2702]: I0508 00:02:49.012912 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abf6f3f8-407b-4995-9062-064d854c8d13-clustermesh-secrets\") pod \"cilium-fx48x\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " pod="kube-system/cilium-fx48x"
May 8 00:02:49.012965 kubelet[2702]: I0508 00:02:49.012929 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79e8184e-6e80-438e-a700-514717eb122c-kube-proxy\") pod \"kube-proxy-79j9n\" (UID: \"79e8184e-6e80-438e-a700-514717eb122c\") " pod="kube-system/kube-proxy-79j9n"
May 8 00:02:49.161336 kubelet[2702]: E0508 00:02:49.161260 2702 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 8 00:02:49.161336 kubelet[2702]: E0508 00:02:49.161314 2702 projected.go:200] Error preparing data for projected volume kube-api-access-56zgh for pod kube-system/kube-proxy-79j9n: configmap "kube-root-ca.crt" not found
May 8 00:02:49.161548 kubelet[2702]: E0508 00:02:49.161418 2702 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/79e8184e-6e80-438e-a700-514717eb122c-kube-api-access-56zgh podName:79e8184e-6e80-438e-a700-514717eb122c nodeName:}" failed. No retries permitted until 2025-05-08 00:02:49.661382143 +0000 UTC m=+15.012975080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-56zgh" (UniqueName: "kubernetes.io/projected/79e8184e-6e80-438e-a700-514717eb122c-kube-api-access-56zgh") pod "kube-proxy-79j9n" (UID: "79e8184e-6e80-438e-a700-514717eb122c") : configmap "kube-root-ca.crt" not found
May 8 00:02:49.162644 kubelet[2702]: E0508 00:02:49.162589 2702 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 8 00:02:49.162644 kubelet[2702]: E0508 00:02:49.162638 2702 projected.go:200] Error preparing data for projected volume kube-api-access-67mxr for pod kube-system/cilium-fx48x: configmap "kube-root-ca.crt" not found
May 8 00:02:49.162932 kubelet[2702]: E0508 00:02:49.162726 2702 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abf6f3f8-407b-4995-9062-064d854c8d13-kube-api-access-67mxr podName:abf6f3f8-407b-4995-9062-064d854c8d13 nodeName:}" failed. No retries permitted until 2025-05-08 00:02:49.662700365 +0000 UTC m=+15.014293302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-67mxr" (UniqueName: "kubernetes.io/projected/abf6f3f8-407b-4995-9062-064d854c8d13-kube-api-access-67mxr") pod "cilium-fx48x" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13") : configmap "kube-root-ca.crt" not found
May 8 00:02:49.173555 kubelet[2702]: I0508 00:02:49.173493 2702 topology_manager.go:215] "Topology Admit Handler" podUID="c53eb686-1cba-48e9-818e-de9bcf865851" podNamespace="kube-system" podName="cilium-operator-599987898-v8sz4"
May 8 00:02:49.186176 systemd[1]: Created slice kubepods-besteffort-podc53eb686_1cba_48e9_818e_de9bcf865851.slice - libcontainer container kubepods-besteffort-podc53eb686_1cba_48e9_818e_de9bcf865851.slice.
May 8 00:02:49.214896 kubelet[2702]: I0508 00:02:49.214833 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c53eb686-1cba-48e9-818e-de9bcf865851-cilium-config-path\") pod \"cilium-operator-599987898-v8sz4\" (UID: \"c53eb686-1cba-48e9-818e-de9bcf865851\") " pod="kube-system/cilium-operator-599987898-v8sz4"
May 8 00:02:49.215397 kubelet[2702]: I0508 00:02:49.214916 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5wcv\" (UniqueName: \"kubernetes.io/projected/c53eb686-1cba-48e9-818e-de9bcf865851-kube-api-access-l5wcv\") pod \"cilium-operator-599987898-v8sz4\" (UID: \"c53eb686-1cba-48e9-818e-de9bcf865851\") " pod="kube-system/cilium-operator-599987898-v8sz4"
May 8 00:02:49.494932 containerd[1519]: time="2025-05-08T00:02:49.494878572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-v8sz4,Uid:c53eb686-1cba-48e9-818e-de9bcf865851,Namespace:kube-system,Attempt:0,}"
May 8 00:02:49.523019 containerd[1519]: time="2025-05-08T00:02:49.522897939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:02:49.523019 containerd[1519]: time="2025-05-08T00:02:49.522986487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:02:49.523019 containerd[1519]: time="2025-05-08T00:02:49.523002837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:02:49.523248 containerd[1519]: time="2025-05-08T00:02:49.523108557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:02:49.544518 systemd[1]: Started cri-containerd-e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac.scope - libcontainer container e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac.
May 8 00:02:49.584670 containerd[1519]: time="2025-05-08T00:02:49.584624965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-v8sz4,Uid:c53eb686-1cba-48e9-818e-de9bcf865851,Namespace:kube-system,Attempt:0,} returns sandbox id \"e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac\""
May 8 00:02:49.586852 containerd[1519]: time="2025-05-08T00:02:49.586813683Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 8 00:02:49.895510 containerd[1519]: time="2025-05-08T00:02:49.895339459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fx48x,Uid:abf6f3f8-407b-4995-9062-064d854c8d13,Namespace:kube-system,Attempt:0,}"
May 8 00:02:49.907629 containerd[1519]: time="2025-05-08T00:02:49.907564338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-79j9n,Uid:79e8184e-6e80-438e-a700-514717eb122c,Namespace:kube-system,Attempt:0,}"
May 8 00:02:49.927819 containerd[1519]: time="2025-05-08T00:02:49.927642383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:02:49.927819 containerd[1519]: time="2025-05-08T00:02:49.927745097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:02:49.927819 containerd[1519]: time="2025-05-08T00:02:49.927768581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:02:49.928013 containerd[1519]: time="2025-05-08T00:02:49.927929456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:02:49.936092 containerd[1519]: time="2025-05-08T00:02:49.934659438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:02:49.936092 containerd[1519]: time="2025-05-08T00:02:49.934746131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:02:49.936092 containerd[1519]: time="2025-05-08T00:02:49.934764796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:02:49.936092 containerd[1519]: time="2025-05-08T00:02:49.934881597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:02:49.951496 systemd[1]: Started cri-containerd-df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8.scope - libcontainer container df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8.
May 8 00:02:49.956909 systemd[1]: Started cri-containerd-b71d677c6feab7b2d4257619bb4769130a0b2a97b0288e364aed514295f630f0.scope - libcontainer container b71d677c6feab7b2d4257619bb4769130a0b2a97b0288e364aed514295f630f0.
May 8 00:02:49.981366 containerd[1519]: time="2025-05-08T00:02:49.981305025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fx48x,Uid:abf6f3f8-407b-4995-9062-064d854c8d13,Namespace:kube-system,Attempt:0,} returns sandbox id \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\""
May 8 00:02:49.988171 containerd[1519]: time="2025-05-08T00:02:49.988114346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-79j9n,Uid:79e8184e-6e80-438e-a700-514717eb122c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b71d677c6feab7b2d4257619bb4769130a0b2a97b0288e364aed514295f630f0\""
May 8 00:02:49.991617 containerd[1519]: time="2025-05-08T00:02:49.991589326Z" level=info msg="CreateContainer within sandbox \"b71d677c6feab7b2d4257619bb4769130a0b2a97b0288e364aed514295f630f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:02:50.019277 containerd[1519]: time="2025-05-08T00:02:50.019192027Z" level=info msg="CreateContainer within sandbox \"b71d677c6feab7b2d4257619bb4769130a0b2a97b0288e364aed514295f630f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7cd80d2c8bf330215904121b32ab4899ce66f0c8c8dbbff05cd3220b704aedad\""
May 8 00:02:50.020096 containerd[1519]: time="2025-05-08T00:02:50.020026424Z" level=info msg="StartContainer for \"7cd80d2c8bf330215904121b32ab4899ce66f0c8c8dbbff05cd3220b704aedad\""
May 8 00:02:50.050480 systemd[1]: Started cri-containerd-7cd80d2c8bf330215904121b32ab4899ce66f0c8c8dbbff05cd3220b704aedad.scope - libcontainer container 7cd80d2c8bf330215904121b32ab4899ce66f0c8c8dbbff05cd3220b704aedad.
May 8 00:02:50.091001 containerd[1519]: time="2025-05-08T00:02:50.090941673Z" level=info msg="StartContainer for \"7cd80d2c8bf330215904121b32ab4899ce66f0c8c8dbbff05cd3220b704aedad\" returns successfully"
May 8 00:02:51.035887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36601036.mount: Deactivated successfully.
May 8 00:02:52.453109 containerd[1519]: time="2025-05-08T00:02:52.453037155Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:52.454175 containerd[1519]: time="2025-05-08T00:02:52.454065266Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 8 00:02:52.455351 containerd[1519]: time="2025-05-08T00:02:52.455291661Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:02:52.456813 containerd[1519]: time="2025-05-08T00:02:52.456775061Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.869918668s"
May 8 00:02:52.456813 containerd[1519]: time="2025-05-08T00:02:52.456806801Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 8 00:02:52.458027 containerd[1519]: time="2025-05-08T00:02:52.457994804Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 8 00:02:52.459585 containerd[1519]: time="2025-05-08T00:02:52.459544880Z" level=info msg="CreateContainer within sandbox \"e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 8 00:02:52.488700 containerd[1519]: time="2025-05-08T00:02:52.488645644Z" level=info msg="CreateContainer within sandbox \"e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9\""
May 8 00:02:52.489338 containerd[1519]: time="2025-05-08T00:02:52.489276335Z" level=info msg="StartContainer for \"c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9\""
May 8 00:02:52.527479 systemd[1]: Started cri-containerd-c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9.scope - libcontainer container c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9.
May 8 00:02:52.558644 containerd[1519]: time="2025-05-08T00:02:52.558551747Z" level=info msg="StartContainer for \"c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9\" returns successfully"
May 8 00:02:53.499650 kubelet[2702]: I0508 00:02:53.499572 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-79j9n" podStartSLOduration=5.499550375 podStartE2EDuration="5.499550375s" podCreationTimestamp="2025-05-08 00:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:02:50.298018058 +0000 UTC m=+15.649611015" watchObservedRunningTime="2025-05-08 00:02:53.499550375 +0000 UTC m=+18.851143322"
May 8 00:03:00.356571 systemd[1]: Started sshd@7-10.0.0.29:22-10.0.0.1:49676.service - OpenSSH per-connection server daemon (10.0.0.1:49676).
May 8 00:03:00.478693 sshd[3143]: Accepted publickey for core from 10.0.0.1 port 49676 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:00.480498 sshd-session[3143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:00.489222 systemd-logind[1506]: New session 8 of user core.
May 8 00:03:00.498637 systemd[1]: Started session-8.scope - Session 8 of User core.
May 8 00:03:00.698638 sshd[3145]: Connection closed by 10.0.0.1 port 49676
May 8 00:03:00.699074 sshd-session[3143]: pam_unix(sshd:session): session closed for user core
May 8 00:03:00.704088 systemd[1]: sshd@7-10.0.0.29:22-10.0.0.1:49676.service: Deactivated successfully.
May 8 00:03:00.707713 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:03:00.709864 systemd-logind[1506]: Session 8 logged out. Waiting for processes to exit.
May 8 00:03:00.711386 systemd-logind[1506]: Removed session 8.
May 8 00:03:01.644315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227927758.mount: Deactivated successfully.
May 8 00:03:05.711026 systemd[1]: Started sshd@8-10.0.0.29:22-10.0.0.1:49680.service - OpenSSH per-connection server daemon (10.0.0.1:49680).
May 8 00:03:06.077563 sshd[3180]: Accepted publickey for core from 10.0.0.1 port 49680 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:06.079298 sshd-session[3180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:06.083924 systemd-logind[1506]: New session 9 of user core.
May 8 00:03:06.091471 systemd[1]: Started session-9.scope - Session 9 of User core.
May 8 00:03:06.248633 sshd[3182]: Connection closed by 10.0.0.1 port 49680
May 8 00:03:06.249107 sshd-session[3180]: pam_unix(sshd:session): session closed for user core
May 8 00:03:06.253978 systemd[1]: sshd@8-10.0.0.29:22-10.0.0.1:49680.service: Deactivated successfully.
May 8 00:03:06.256530 systemd[1]: session-9.scope: Deactivated successfully.
May 8 00:03:06.257346 systemd-logind[1506]: Session 9 logged out. Waiting for processes to exit.
May 8 00:03:06.258338 systemd-logind[1506]: Removed session 9.
May 8 00:03:06.307246 containerd[1519]: time="2025-05-08T00:03:06.307162300Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:06.309213 containerd[1519]: time="2025-05-08T00:03:06.309104432Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 8 00:03:06.310464 containerd[1519]: time="2025-05-08T00:03:06.310425126Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:03:06.312885 containerd[1519]: time="2025-05-08T00:03:06.312507932Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.854475899s"
May 8 00:03:06.312885 containerd[1519]: time="2025-05-08T00:03:06.312568707Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 8 00:03:06.320526 containerd[1519]: time="2025-05-08T00:03:06.320476556Z" level=info msg="CreateContainer within sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:03:06.336550 containerd[1519]: time="2025-05-08T00:03:06.336431837Z" level=info msg="CreateContainer within sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c\""
May 8 00:03:06.337136 containerd[1519]: time="2025-05-08T00:03:06.337056251Z" level=info msg="StartContainer for \"b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c\""
May 8 00:03:06.361959 systemd[1]: run-containerd-runc-k8s.io-b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c-runc.MEEzti.mount: Deactivated successfully.
May 8 00:03:06.371530 systemd[1]: Started cri-containerd-b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c.scope - libcontainer container b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c.
May 8 00:03:06.429743 systemd[1]: cri-containerd-b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c.scope: Deactivated successfully.
May 8 00:03:06.601695 containerd[1519]: time="2025-05-08T00:03:06.601543475Z" level=info msg="StartContainer for \"b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c\" returns successfully"
May 8 00:03:06.989159 containerd[1519]: time="2025-05-08T00:03:06.989067482Z" level=info msg="shim disconnected" id=b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c namespace=k8s.io
May 8 00:03:06.989159 containerd[1519]: time="2025-05-08T00:03:06.989151910Z" level=warning msg="cleaning up after shim disconnected" id=b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c namespace=k8s.io
May 8 00:03:06.989159 containerd[1519]: time="2025-05-08T00:03:06.989160517Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:03:07.326943 containerd[1519]: time="2025-05-08T00:03:07.326715081Z" level=info msg="CreateContainer within sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:03:07.331821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c-rootfs.mount: Deactivated successfully.
May 8 00:03:07.457176 kubelet[2702]: I0508 00:03:07.457102 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-v8sz4" podStartSLOduration=15.58569665 podStartE2EDuration="18.457082449s" podCreationTimestamp="2025-05-08 00:02:49 +0000 UTC" firstStartedPulling="2025-05-08 00:02:49.586407775 +0000 UTC m=+14.938000702" lastFinishedPulling="2025-05-08 00:02:52.457793564 +0000 UTC m=+17.809386501" observedRunningTime="2025-05-08 00:02:53.499725495 +0000 UTC m=+18.851318432" watchObservedRunningTime="2025-05-08 00:03:07.457082449 +0000 UTC m=+32.808675386"
May 8 00:03:07.483351 containerd[1519]: time="2025-05-08T00:03:07.483283488Z" level=info msg="CreateContainer within sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1\""
May 8 00:03:07.483860 containerd[1519]: time="2025-05-08T00:03:07.483798868Z" level=info msg="StartContainer for \"dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1\""
May 8 00:03:07.513469 systemd[1]: Started cri-containerd-dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1.scope - libcontainer container dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1.
May 8 00:03:07.539973 containerd[1519]: time="2025-05-08T00:03:07.539913877Z" level=info msg="StartContainer for \"dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1\" returns successfully"
May 8 00:03:07.554653 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:03:07.555123 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:03:07.555442 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 8 00:03:07.564001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:03:07.564366 systemd[1]: cri-containerd-dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1.scope: Deactivated successfully.
May 8 00:03:07.582205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:03:07.584830 containerd[1519]: time="2025-05-08T00:03:07.584767093Z" level=info msg="shim disconnected" id=dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1 namespace=k8s.io
May 8 00:03:07.584932 containerd[1519]: time="2025-05-08T00:03:07.584831624Z" level=warning msg="cleaning up after shim disconnected" id=dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1 namespace=k8s.io
May 8 00:03:07.584932 containerd[1519]: time="2025-05-08T00:03:07.584845019Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:03:08.331345 containerd[1519]: time="2025-05-08T00:03:08.331279920Z" level=info msg="CreateContainer within sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:03:08.331361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1-rootfs.mount: Deactivated successfully.
May 8 00:03:08.589241 containerd[1519]: time="2025-05-08T00:03:08.589100080Z" level=info msg="CreateContainer within sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f\""
May 8 00:03:08.590050 containerd[1519]: time="2025-05-08T00:03:08.590004581Z" level=info msg="StartContainer for \"eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f\""
May 8 00:03:08.629717 systemd[1]: Started cri-containerd-eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f.scope - libcontainer container eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f.
May 8 00:03:08.672678 containerd[1519]: time="2025-05-08T00:03:08.672606823Z" level=info msg="StartContainer for \"eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f\" returns successfully"
May 8 00:03:08.673199 systemd[1]: cri-containerd-eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f.scope: Deactivated successfully.
May 8 00:03:08.707385 containerd[1519]: time="2025-05-08T00:03:08.706071859Z" level=info msg="shim disconnected" id=eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f namespace=k8s.io
May 8 00:03:08.707385 containerd[1519]: time="2025-05-08T00:03:08.706160385Z" level=warning msg="cleaning up after shim disconnected" id=eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f namespace=k8s.io
May 8 00:03:08.707385 containerd[1519]: time="2025-05-08T00:03:08.706172097Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:03:09.332105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f-rootfs.mount: Deactivated successfully.
May 8 00:03:09.334273 containerd[1519]: time="2025-05-08T00:03:09.334220485Z" level=info msg="CreateContainer within sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:03:09.359972 containerd[1519]: time="2025-05-08T00:03:09.359885874Z" level=info msg="CreateContainer within sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7\""
May 8 00:03:09.360703 containerd[1519]: time="2025-05-08T00:03:09.360652434Z" level=info msg="StartContainer for \"3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7\""
May 8 00:03:09.397481 systemd[1]: Started cri-containerd-3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7.scope - libcontainer container 3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7.
May 8 00:03:09.427644 systemd[1]: cri-containerd-3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7.scope: Deactivated successfully.
May 8 00:03:09.432288 containerd[1519]: time="2025-05-08T00:03:09.432248989Z" level=info msg="StartContainer for \"3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7\" returns successfully"
May 8 00:03:09.465731 containerd[1519]: time="2025-05-08T00:03:09.465632803Z" level=info msg="shim disconnected" id=3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7 namespace=k8s.io
May 8 00:03:09.465731 containerd[1519]: time="2025-05-08T00:03:09.465702965Z" level=warning msg="cleaning up after shim disconnected" id=3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7 namespace=k8s.io
May 8 00:03:09.465731 containerd[1519]: time="2025-05-08T00:03:09.465715188Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:03:10.331719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7-rootfs.mount: Deactivated successfully.
May 8 00:03:10.359488 containerd[1519]: time="2025-05-08T00:03:10.355236267Z" level=info msg="CreateContainer within sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:03:10.525790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408973613.mount: Deactivated successfully.
May 8 00:03:10.610691 containerd[1519]: time="2025-05-08T00:03:10.610482193Z" level=info msg="CreateContainer within sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8\""
May 8 00:03:10.611443 containerd[1519]: time="2025-05-08T00:03:10.611375842Z" level=info msg="StartContainer for \"826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8\""
May 8 00:03:10.693152 systemd[1]: Started cri-containerd-826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8.scope - libcontainer container 826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8.
May 8 00:03:10.801316 containerd[1519]: time="2025-05-08T00:03:10.801240211Z" level=info msg="StartContainer for \"826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8\" returns successfully"
May 8 00:03:10.964283 kubelet[2702]: I0508 00:03:10.963724 2702 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 8 00:03:11.104955 kubelet[2702]: I0508 00:03:11.104840 2702 topology_manager.go:215] "Topology Admit Handler" podUID="0b6f94de-cd55-40bf-87cb-49c23a1f8b88" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6w7qc"
May 8 00:03:11.110751 kubelet[2702]: I0508 00:03:11.109678 2702 topology_manager.go:215] "Topology Admit Handler" podUID="8db93f0f-ebaf-4c8b-9939-e7bd8bd63966" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gfl9f"
May 8 00:03:11.134986 systemd[1]: Created slice kubepods-burstable-pod0b6f94de_cd55_40bf_87cb_49c23a1f8b88.slice - libcontainer container kubepods-burstable-pod0b6f94de_cd55_40bf_87cb_49c23a1f8b88.slice.
May 8 00:03:11.155774 systemd[1]: Created slice kubepods-burstable-pod8db93f0f_ebaf_4c8b_9939_e7bd8bd63966.slice - libcontainer container kubepods-burstable-pod8db93f0f_ebaf_4c8b_9939_e7bd8bd63966.slice.
May 8 00:03:11.292224 kubelet[2702]: I0508 00:03:11.272363 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv92x\" (UniqueName: \"kubernetes.io/projected/8db93f0f-ebaf-4c8b-9939-e7bd8bd63966-kube-api-access-cv92x\") pod \"coredns-7db6d8ff4d-gfl9f\" (UID: \"8db93f0f-ebaf-4c8b-9939-e7bd8bd63966\") " pod="kube-system/coredns-7db6d8ff4d-gfl9f"
May 8 00:03:11.292224 kubelet[2702]: I0508 00:03:11.272430 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7rqf\" (UniqueName: \"kubernetes.io/projected/0b6f94de-cd55-40bf-87cb-49c23a1f8b88-kube-api-access-k7rqf\") pod \"coredns-7db6d8ff4d-6w7qc\" (UID: \"0b6f94de-cd55-40bf-87cb-49c23a1f8b88\") " pod="kube-system/coredns-7db6d8ff4d-6w7qc"
May 8 00:03:11.292224 kubelet[2702]: I0508 00:03:11.272452 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b6f94de-cd55-40bf-87cb-49c23a1f8b88-config-volume\") pod \"coredns-7db6d8ff4d-6w7qc\" (UID: \"0b6f94de-cd55-40bf-87cb-49c23a1f8b88\") " pod="kube-system/coredns-7db6d8ff4d-6w7qc"
May 8 00:03:11.292224 kubelet[2702]: I0508 00:03:11.272474 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8db93f0f-ebaf-4c8b-9939-e7bd8bd63966-config-volume\") pod \"coredns-7db6d8ff4d-gfl9f\" (UID: \"8db93f0f-ebaf-4c8b-9939-e7bd8bd63966\") " pod="kube-system/coredns-7db6d8ff4d-gfl9f"
May 8 00:03:11.314706 systemd[1]: Started sshd@9-10.0.0.29:22-10.0.0.1:41786.service - OpenSSH per-connection server daemon (10.0.0.1:41786).
May 8 00:03:11.393566 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 41786 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:11.395933 sshd-session[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:11.402122 systemd-logind[1506]: New session 10 of user core.
May 8 00:03:11.405523 systemd[1]: Started session-10.scope - Session 10 of User core.
May 8 00:03:11.512920 kubelet[2702]: I0508 00:03:11.512844 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fx48x" podStartSLOduration=7.181595976 podStartE2EDuration="23.512825527s" podCreationTimestamp="2025-05-08 00:02:48 +0000 UTC" firstStartedPulling="2025-05-08 00:02:49.982940978 +0000 UTC m=+15.334533915" lastFinishedPulling="2025-05-08 00:03:06.314170529 +0000 UTC m=+31.665763466" observedRunningTime="2025-05-08 00:03:11.512001198 +0000 UTC m=+36.863594136" watchObservedRunningTime="2025-05-08 00:03:11.512825527 +0000 UTC m=+36.864418464"
May 8 00:03:11.554816 sshd[3520]: Connection closed by 10.0.0.1 port 41786
May 8 00:03:11.555104 sshd-session[3518]: pam_unix(sshd:session): session closed for user core
May 8 00:03:11.559807 systemd[1]: sshd@9-10.0.0.29:22-10.0.0.1:41786.service: Deactivated successfully.
May 8 00:03:11.562048 systemd[1]: session-10.scope: Deactivated successfully.
May 8 00:03:11.562821 systemd-logind[1506]: Session 10 logged out. Waiting for processes to exit.
May 8 00:03:11.563838 systemd-logind[1506]: Removed session 10.
May 8 00:03:11.751197 containerd[1519]: time="2025-05-08T00:03:11.751130078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6w7qc,Uid:0b6f94de-cd55-40bf-87cb-49c23a1f8b88,Namespace:kube-system,Attempt:0,}"
May 8 00:03:11.759962 containerd[1519]: time="2025-05-08T00:03:11.759599833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gfl9f,Uid:8db93f0f-ebaf-4c8b-9939-e7bd8bd63966,Namespace:kube-system,Attempt:0,}"
May 8 00:03:13.576601 systemd-networkd[1434]: cilium_host: Link UP
May 8 00:03:13.576784 systemd-networkd[1434]: cilium_net: Link UP
May 8 00:03:13.576984 systemd-networkd[1434]: cilium_net: Gained carrier
May 8 00:03:13.577163 systemd-networkd[1434]: cilium_host: Gained carrier
May 8 00:03:13.693480 systemd-networkd[1434]: cilium_vxlan: Link UP
May 8 00:03:13.693493 systemd-networkd[1434]: cilium_vxlan: Gained carrier
May 8 00:03:13.925481 systemd-networkd[1434]: cilium_net: Gained IPv6LL
May 8 00:03:13.946361 kernel: NET: Registered PF_ALG protocol family
May 8 00:03:14.413513 systemd-networkd[1434]: cilium_host: Gained IPv6LL
May 8 00:03:14.696802 systemd-networkd[1434]: lxc_health: Link UP
May 8 00:03:14.708201 systemd-networkd[1434]: lxc_health: Gained carrier
May 8 00:03:15.222892 kernel: eth0: renamed from tmp77337
May 8 00:03:15.228379 kernel: eth0: renamed from tmp8190e
May 8 00:03:15.242635 systemd-networkd[1434]: lxce94caf3ea822: Link UP
May 8 00:03:15.250422 systemd-networkd[1434]: lxc96ba9e810e26: Link UP
May 8 00:03:15.251452 systemd-networkd[1434]: lxce94caf3ea822: Gained carrier
May 8 00:03:15.251689 systemd-networkd[1434]: lxc96ba9e810e26: Gained carrier
May 8 00:03:15.309049 systemd-networkd[1434]: cilium_vxlan: Gained IPv6LL
May 8 00:03:16.568385 systemd[1]: Started sshd@10-10.0.0.29:22-10.0.0.1:41790.service - OpenSSH per-connection server daemon (10.0.0.1:41790).
May 8 00:03:16.610936 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 41790 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:16.612768 sshd-session[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:16.618236 systemd-logind[1506]: New session 11 of user core.
May 8 00:03:16.622485 systemd[1]: Started session-11.scope - Session 11 of User core.
May 8 00:03:16.716562 systemd-networkd[1434]: lxc_health: Gained IPv6LL
May 8 00:03:16.752334 sshd[3974]: Connection closed by 10.0.0.1 port 41790
May 8 00:03:16.754831 sshd-session[3972]: pam_unix(sshd:session): session closed for user core
May 8 00:03:16.759505 systemd[1]: sshd@10-10.0.0.29:22-10.0.0.1:41790.service: Deactivated successfully.
May 8 00:03:16.762214 systemd[1]: session-11.scope: Deactivated successfully.
May 8 00:03:16.764654 systemd-logind[1506]: Session 11 logged out. Waiting for processes to exit.
May 8 00:03:16.765858 systemd-logind[1506]: Removed session 11.
May 8 00:03:17.037504 systemd-networkd[1434]: lxce94caf3ea822: Gained IPv6LL
May 8 00:03:17.164574 systemd-networkd[1434]: lxc96ba9e810e26: Gained IPv6LL
May 8 00:03:18.818116 containerd[1519]: time="2025-05-08T00:03:18.817976692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:03:18.818770 containerd[1519]: time="2025-05-08T00:03:18.818053516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:03:18.818770 containerd[1519]: time="2025-05-08T00:03:18.818212976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:03:18.819462 containerd[1519]: time="2025-05-08T00:03:18.819388362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:03:18.826819 containerd[1519]: time="2025-05-08T00:03:18.825082534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:03:18.826819 containerd[1519]: time="2025-05-08T00:03:18.825149089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:03:18.826819 containerd[1519]: time="2025-05-08T00:03:18.825168366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:03:18.826819 containerd[1519]: time="2025-05-08T00:03:18.825270708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:03:18.850563 systemd[1]: Started cri-containerd-77337ce05f5f49e89e71615195781964c376b14789b19ea7bc0cdd0820d4ff59.scope - libcontainer container 77337ce05f5f49e89e71615195781964c376b14789b19ea7bc0cdd0820d4ff59.
May 8 00:03:18.858248 systemd[1]: Started cri-containerd-8190e1a4977f4bd3efd89095105abc4f6eba2ce8bdbd5aa089effddcc26d9b07.scope - libcontainer container 8190e1a4977f4bd3efd89095105abc4f6eba2ce8bdbd5aa089effddcc26d9b07.
May 8 00:03:18.865722 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:03:18.872690 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:03:18.912050 containerd[1519]: time="2025-05-08T00:03:18.911994847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gfl9f,Uid:8db93f0f-ebaf-4c8b-9939-e7bd8bd63966,Namespace:kube-system,Attempt:0,} returns sandbox id \"77337ce05f5f49e89e71615195781964c376b14789b19ea7bc0cdd0820d4ff59\""
May 8 00:03:18.915691 containerd[1519]: time="2025-05-08T00:03:18.915485571Z" level=info msg="CreateContainer within sandbox \"77337ce05f5f49e89e71615195781964c376b14789b19ea7bc0cdd0820d4ff59\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:03:18.916053 containerd[1519]: time="2025-05-08T00:03:18.916019824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6w7qc,Uid:0b6f94de-cd55-40bf-87cb-49c23a1f8b88,Namespace:kube-system,Attempt:0,} returns sandbox id \"8190e1a4977f4bd3efd89095105abc4f6eba2ce8bdbd5aa089effddcc26d9b07\""
May 8 00:03:18.919513 containerd[1519]: time="2025-05-08T00:03:18.919484850Z" level=info msg="CreateContainer within sandbox \"8190e1a4977f4bd3efd89095105abc4f6eba2ce8bdbd5aa089effddcc26d9b07\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:03:19.107007 containerd[1519]: time="2025-05-08T00:03:19.106852700Z" level=info msg="CreateContainer within sandbox \"8190e1a4977f4bd3efd89095105abc4f6eba2ce8bdbd5aa089effddcc26d9b07\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2db5e3657b490cb352f11805a80a5c351d0e07726112aa93da71170610a6f94\""
May 8 00:03:19.108356 containerd[1519]: time="2025-05-08T00:03:19.107555529Z" level=info msg="StartContainer for \"e2db5e3657b490cb352f11805a80a5c351d0e07726112aa93da71170610a6f94\""
May 8 00:03:19.138590 systemd[1]: Started cri-containerd-e2db5e3657b490cb352f11805a80a5c351d0e07726112aa93da71170610a6f94.scope - libcontainer container e2db5e3657b490cb352f11805a80a5c351d0e07726112aa93da71170610a6f94.
May 8 00:03:19.146772 containerd[1519]: time="2025-05-08T00:03:19.146696748Z" level=info msg="CreateContainer within sandbox \"77337ce05f5f49e89e71615195781964c376b14789b19ea7bc0cdd0820d4ff59\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731de68d158f3e1c4efe0e0a41482a11b743cc33eb85a29fa7462062dd9d530c\""
May 8 00:03:19.149413 containerd[1519]: time="2025-05-08T00:03:19.149360259Z" level=info msg="StartContainer for \"731de68d158f3e1c4efe0e0a41482a11b743cc33eb85a29fa7462062dd9d530c\""
May 8 00:03:19.190584 systemd[1]: Started cri-containerd-731de68d158f3e1c4efe0e0a41482a11b743cc33eb85a29fa7462062dd9d530c.scope - libcontainer container 731de68d158f3e1c4efe0e0a41482a11b743cc33eb85a29fa7462062dd9d530c.
May 8 00:03:19.346104 containerd[1519]: time="2025-05-08T00:03:19.346045108Z" level=info msg="StartContainer for \"731de68d158f3e1c4efe0e0a41482a11b743cc33eb85a29fa7462062dd9d530c\" returns successfully"
May 8 00:03:19.346295 containerd[1519]: time="2025-05-08T00:03:19.346045008Z" level=info msg="StartContainer for \"e2db5e3657b490cb352f11805a80a5c351d0e07726112aa93da71170610a6f94\" returns successfully"
May 8 00:03:19.420362 kubelet[2702]: I0508 00:03:19.419911 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6w7qc" podStartSLOduration=30.419893311 podStartE2EDuration="30.419893311s" podCreationTimestamp="2025-05-08 00:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:03:19.41979202 +0000 UTC m=+44.771384957" watchObservedRunningTime="2025-05-08 00:03:19.419893311 +0000 UTC m=+44.771486248"
May 8 00:03:19.825040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount50127090.mount: Deactivated successfully.
May 8 00:03:20.418152 kubelet[2702]: I0508 00:03:20.417798 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gfl9f" podStartSLOduration=31.417777643 podStartE2EDuration="31.417777643s" podCreationTimestamp="2025-05-08 00:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:03:19.503180173 +0000 UTC m=+44.854773120" watchObservedRunningTime="2025-05-08 00:03:20.417777643 +0000 UTC m=+45.769370580"
May 8 00:03:21.764889 systemd[1]: Started sshd@11-10.0.0.29:22-10.0.0.1:43270.service - OpenSSH per-connection server daemon (10.0.0.1:43270).
May 8 00:03:21.809529 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 43270 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:21.811704 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:21.816201 systemd-logind[1506]: New session 12 of user core.
May 8 00:03:21.823458 systemd[1]: Started session-12.scope - Session 12 of User core.
May 8 00:03:21.939888 sshd[4170]: Connection closed by 10.0.0.1 port 43270
May 8 00:03:21.940371 sshd-session[4168]: pam_unix(sshd:session): session closed for user core
May 8 00:03:21.949392 systemd[1]: sshd@11-10.0.0.29:22-10.0.0.1:43270.service: Deactivated successfully.
May 8 00:03:21.951509 systemd[1]: session-12.scope: Deactivated successfully.
May 8 00:03:21.953228 systemd-logind[1506]: Session 12 logged out. Waiting for processes to exit.
May 8 00:03:21.966699 systemd[1]: Started sshd@12-10.0.0.29:22-10.0.0.1:43278.service - OpenSSH per-connection server daemon (10.0.0.1:43278).
May 8 00:03:21.968378 systemd-logind[1506]: Removed session 12.
May 8 00:03:22.003256 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 43278 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:22.004985 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:22.009718 systemd-logind[1506]: New session 13 of user core.
May 8 00:03:22.020511 systemd[1]: Started session-13.scope - Session 13 of User core.
May 8 00:03:22.318200 sshd[4187]: Connection closed by 10.0.0.1 port 43278
May 8 00:03:22.318682 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
May 8 00:03:22.328285 systemd[1]: sshd@12-10.0.0.29:22-10.0.0.1:43278.service: Deactivated successfully.
May 8 00:03:22.331020 systemd[1]: session-13.scope: Deactivated successfully.
May 8 00:03:22.333173 systemd-logind[1506]: Session 13 logged out. Waiting for processes to exit.
May 8 00:03:22.343739 systemd[1]: Started sshd@13-10.0.0.29:22-10.0.0.1:43292.service - OpenSSH per-connection server daemon (10.0.0.1:43292).
May 8 00:03:22.345465 systemd-logind[1506]: Removed session 13.
May 8 00:03:22.380707 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 43292 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:22.382276 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:22.386857 systemd-logind[1506]: New session 14 of user core.
May 8 00:03:22.394456 systemd[1]: Started session-14.scope - Session 14 of User core.
May 8 00:03:22.564929 sshd[4200]: Connection closed by 10.0.0.1 port 43292
May 8 00:03:22.565289 sshd-session[4197]: pam_unix(sshd:session): session closed for user core
May 8 00:03:22.569141 systemd[1]: sshd@13-10.0.0.29:22-10.0.0.1:43292.service: Deactivated successfully.
May 8 00:03:22.571304 systemd[1]: session-14.scope: Deactivated successfully.
May 8 00:03:22.572073 systemd-logind[1506]: Session 14 logged out. Waiting for processes to exit.
May 8 00:03:22.572981 systemd-logind[1506]: Removed session 14.
May 8 00:03:27.587948 systemd[1]: Started sshd@14-10.0.0.29:22-10.0.0.1:37720.service - OpenSSH per-connection server daemon (10.0.0.1:37720).
May 8 00:03:27.635340 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 37720 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:27.637434 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:27.642949 systemd-logind[1506]: New session 15 of user core.
May 8 00:03:27.654610 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 00:03:27.782662 sshd[4218]: Connection closed by 10.0.0.1 port 37720
May 8 00:03:27.783103 sshd-session[4216]: pam_unix(sshd:session): session closed for user core
May 8 00:03:27.788095 systemd[1]: sshd@14-10.0.0.29:22-10.0.0.1:37720.service: Deactivated successfully.
May 8 00:03:27.790956 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:03:27.791894 systemd-logind[1506]: Session 15 logged out. Waiting for processes to exit.
May 8 00:03:27.793043 systemd-logind[1506]: Removed session 15.
May 8 00:03:32.798410 systemd[1]: Started sshd@15-10.0.0.29:22-10.0.0.1:37734.service - OpenSSH per-connection server daemon (10.0.0.1:37734).
May 8 00:03:32.839045 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 37734 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:32.840913 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:32.846184 systemd-logind[1506]: New session 16 of user core.
May 8 00:03:32.854467 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 00:03:32.967447 sshd[4234]: Connection closed by 10.0.0.1 port 37734
May 8 00:03:32.968003 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
May 8 00:03:32.972115 systemd[1]: sshd@15-10.0.0.29:22-10.0.0.1:37734.service: Deactivated successfully.
May 8 00:03:32.974248 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:03:32.975008 systemd-logind[1506]: Session 16 logged out. Waiting for processes to exit.
May 8 00:03:32.976001 systemd-logind[1506]: Removed session 16.
May 8 00:03:37.985437 systemd[1]: Started sshd@16-10.0.0.29:22-10.0.0.1:41178.service - OpenSSH per-connection server daemon (10.0.0.1:41178).
May 8 00:03:38.051864 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 41178 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:38.053452 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:38.057739 systemd-logind[1506]: New session 17 of user core.
May 8 00:03:38.064458 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 00:03:38.171865 sshd[4252]: Connection closed by 10.0.0.1 port 41178
May 8 00:03:38.172222 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
May 8 00:03:38.182965 systemd[1]: sshd@16-10.0.0.29:22-10.0.0.1:41178.service: Deactivated successfully.
May 8 00:03:38.184962 systemd[1]: session-17.scope: Deactivated successfully.
May 8 00:03:38.186673 systemd-logind[1506]: Session 17 logged out. Waiting for processes to exit.
May 8 00:03:38.192596 systemd[1]: Started sshd@17-10.0.0.29:22-10.0.0.1:41192.service - OpenSSH per-connection server daemon (10.0.0.1:41192).
May 8 00:03:38.193893 systemd-logind[1506]: Removed session 17.
May 8 00:03:38.229918 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 41192 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:38.231842 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:38.236899 systemd-logind[1506]: New session 18 of user core.
May 8 00:03:38.244461 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 00:03:38.911705 sshd[4267]: Connection closed by 10.0.0.1 port 41192
May 8 00:03:38.912174 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
May 8 00:03:38.926625 systemd[1]: sshd@17-10.0.0.29:22-10.0.0.1:41192.service: Deactivated successfully.
May 8 00:03:38.928702 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:03:38.930553 systemd-logind[1506]: Session 18 logged out. Waiting for processes to exit.
May 8 00:03:38.942913 systemd[1]: Started sshd@18-10.0.0.29:22-10.0.0.1:41204.service - OpenSSH per-connection server daemon (10.0.0.1:41204).
May 8 00:03:38.944303 systemd-logind[1506]: Removed session 18.
May 8 00:03:38.983594 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 41204 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:38.985339 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:38.990406 systemd-logind[1506]: New session 19 of user core.
May 8 00:03:39.000460 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 00:03:40.598223 sshd[4280]: Connection closed by 10.0.0.1 port 41204
May 8 00:03:40.600485 sshd-session[4277]: pam_unix(sshd:session): session closed for user core
May 8 00:03:40.608721 systemd[1]: sshd@18-10.0.0.29:22-10.0.0.1:41204.service: Deactivated successfully.
May 8 00:03:40.611291 systemd[1]: session-19.scope: Deactivated successfully.
May 8 00:03:40.614850 systemd-logind[1506]: Session 19 logged out. Waiting for processes to exit.
May 8 00:03:40.623629 systemd[1]: Started sshd@19-10.0.0.29:22-10.0.0.1:41206.service - OpenSSH per-connection server daemon (10.0.0.1:41206).
May 8 00:03:40.624756 systemd-logind[1506]: Removed session 19.
May 8 00:03:40.675510 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 41206 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:40.677366 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:40.682295 systemd-logind[1506]: New session 20 of user core.
May 8 00:03:40.690448 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 00:03:40.986846 sshd[4302]: Connection closed by 10.0.0.1 port 41206
May 8 00:03:40.987696 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
May 8 00:03:40.998983 systemd[1]: sshd@19-10.0.0.29:22-10.0.0.1:41206.service: Deactivated successfully.
May 8 00:03:41.001241 systemd[1]: session-20.scope: Deactivated successfully.
May 8 00:03:41.002980 systemd-logind[1506]: Session 20 logged out. Waiting for processes to exit.
May 8 00:03:41.004464 systemd[1]: Started sshd@20-10.0.0.29:22-10.0.0.1:41222.service - OpenSSH per-connection server daemon (10.0.0.1:41222).
May 8 00:03:41.005724 systemd-logind[1506]: Removed session 20.
May 8 00:03:41.056701 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 41222 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:41.058301 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:41.063102 systemd-logind[1506]: New session 21 of user core.
May 8 00:03:41.076463 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 00:03:41.182931 sshd[4315]: Connection closed by 10.0.0.1 port 41222
May 8 00:03:41.183463 sshd-session[4312]: pam_unix(sshd:session): session closed for user core
May 8 00:03:41.189187 systemd[1]: sshd@20-10.0.0.29:22-10.0.0.1:41222.service: Deactivated successfully.
May 8 00:03:41.191588 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:03:41.192267 systemd-logind[1506]: Session 21 logged out. Waiting for processes to exit.
May 8 00:03:41.193207 systemd-logind[1506]: Removed session 21.
May 8 00:03:46.200745 systemd[1]: Started sshd@21-10.0.0.29:22-10.0.0.1:41232.service - OpenSSH per-connection server daemon (10.0.0.1:41232).
May 8 00:03:46.241050 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 41232 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:46.242614 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:46.247167 systemd-logind[1506]: New session 22 of user core.
May 8 00:03:46.254495 systemd[1]: Started session-22.scope - Session 22 of User core.
May 8 00:03:46.362891 sshd[4330]: Connection closed by 10.0.0.1 port 41232
May 8 00:03:46.363265 sshd-session[4328]: pam_unix(sshd:session): session closed for user core
May 8 00:03:46.367352 systemd[1]: sshd@21-10.0.0.29:22-10.0.0.1:41232.service: Deactivated successfully.
May 8 00:03:46.369656 systemd[1]: session-22.scope: Deactivated successfully.
May 8 00:03:46.370332 systemd-logind[1506]: Session 22 logged out. Waiting for processes to exit.
May 8 00:03:46.371407 systemd-logind[1506]: Removed session 22.
May 8 00:03:51.375281 systemd[1]: Started sshd@22-10.0.0.29:22-10.0.0.1:32956.service - OpenSSH per-connection server daemon (10.0.0.1:32956).
May 8 00:03:51.413838 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 32956 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:51.415298 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:51.419475 systemd-logind[1506]: New session 23 of user core.
May 8 00:03:51.427444 systemd[1]: Started session-23.scope - Session 23 of User core.
May 8 00:03:51.532546 sshd[4351]: Connection closed by 10.0.0.1 port 32956
May 8 00:03:51.532944 sshd-session[4349]: pam_unix(sshd:session): session closed for user core
May 8 00:03:51.536990 systemd[1]: sshd@22-10.0.0.29:22-10.0.0.1:32956.service: Deactivated successfully.
May 8 00:03:51.539225 systemd[1]: session-23.scope: Deactivated successfully.
May 8 00:03:51.540033 systemd-logind[1506]: Session 23 logged out. Waiting for processes to exit.
May 8 00:03:51.541010 systemd-logind[1506]: Removed session 23.
May 8 00:03:56.546554 systemd[1]: Started sshd@23-10.0.0.29:22-10.0.0.1:32968.service - OpenSSH per-connection server daemon (10.0.0.1:32968).
May 8 00:03:56.585588 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 32968 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:03:56.587108 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:03:56.591669 systemd-logind[1506]: New session 24 of user core.
May 8 00:03:56.601450 systemd[1]: Started session-24.scope - Session 24 of User core.
May 8 00:03:56.706363 sshd[4366]: Connection closed by 10.0.0.1 port 32968
May 8 00:03:56.706772 sshd-session[4364]: pam_unix(sshd:session): session closed for user core
May 8 00:03:56.710599 systemd[1]: sshd@23-10.0.0.29:22-10.0.0.1:32968.service: Deactivated successfully.
May 8 00:03:56.712829 systemd[1]: session-24.scope: Deactivated successfully.
May 8 00:03:56.713542 systemd-logind[1506]: Session 24 logged out. Waiting for processes to exit.
May 8 00:03:56.714370 systemd-logind[1506]: Removed session 24.
May 8 00:04:01.731572 systemd[1]: Started sshd@24-10.0.0.29:22-10.0.0.1:38168.service - OpenSSH per-connection server daemon (10.0.0.1:38168).
May 8 00:04:01.766069 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 38168 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:04:01.768119 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:04:01.773459 systemd-logind[1506]: New session 25 of user core.
May 8 00:04:01.780482 systemd[1]: Started session-25.scope - Session 25 of User core.
May 8 00:04:01.891571 sshd[4381]: Connection closed by 10.0.0.1 port 38168 May 8 00:04:01.892059 sshd-session[4379]: pam_unix(sshd:session): session closed for user core May 8 00:04:01.905023 systemd[1]: sshd@24-10.0.0.29:22-10.0.0.1:38168.service: Deactivated successfully. May 8 00:04:01.906987 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:04:01.908661 systemd-logind[1506]: Session 25 logged out. Waiting for processes to exit. May 8 00:04:01.914582 systemd[1]: Started sshd@25-10.0.0.29:22-10.0.0.1:38178.service - OpenSSH per-connection server daemon (10.0.0.1:38178). May 8 00:04:01.915657 systemd-logind[1506]: Removed session 25. May 8 00:04:01.949049 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 38178 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:04:01.950678 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:04:01.954979 systemd-logind[1506]: New session 26 of user core. May 8 00:04:01.965472 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:04:03.312625 containerd[1519]: time="2025-05-08T00:04:03.312572165Z" level=info msg="StopContainer for \"c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9\" with timeout 30 (s)" May 8 00:04:03.313834 containerd[1519]: time="2025-05-08T00:04:03.313476648Z" level=info msg="Stop container \"c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9\" with signal terminated" May 8 00:04:03.330565 systemd[1]: cri-containerd-c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9.scope: Deactivated successfully. 
May 8 00:04:03.347896 containerd[1519]: time="2025-05-08T00:04:03.347826163Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:04:03.352472 containerd[1519]: time="2025-05-08T00:04:03.352435383Z" level=info msg="StopContainer for \"826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8\" with timeout 2 (s)" May 8 00:04:03.352679 containerd[1519]: time="2025-05-08T00:04:03.352657918Z" level=info msg="Stop container \"826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8\" with signal terminated" May 8 00:04:03.360404 systemd-networkd[1434]: lxc_health: Link DOWN May 8 00:04:03.360415 systemd-networkd[1434]: lxc_health: Lost carrier May 8 00:04:03.363070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9-rootfs.mount: Deactivated successfully. May 8 00:04:03.372820 containerd[1519]: time="2025-05-08T00:04:03.372732810Z" level=info msg="shim disconnected" id=c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9 namespace=k8s.io May 8 00:04:03.372820 containerd[1519]: time="2025-05-08T00:04:03.372787033Z" level=warning msg="cleaning up after shim disconnected" id=c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9 namespace=k8s.io May 8 00:04:03.372820 containerd[1519]: time="2025-05-08T00:04:03.372795990Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:04:03.376539 systemd[1]: cri-containerd-826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8.scope: Deactivated successfully. May 8 00:04:03.376910 systemd[1]: cri-containerd-826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8.scope: Consumed 7.457s CPU time, 123.2M memory peak, 272K read from disk, 13.3M written to disk. 
May 8 00:04:03.392389 containerd[1519]: time="2025-05-08T00:04:03.392155159Z" level=info msg="StopContainer for \"c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9\" returns successfully" May 8 00:04:03.396231 containerd[1519]: time="2025-05-08T00:04:03.396200395Z" level=info msg="StopPodSandbox for \"e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac\"" May 8 00:04:03.399678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8-rootfs.mount: Deactivated successfully. May 8 00:04:03.404921 containerd[1519]: time="2025-05-08T00:04:03.404855904Z" level=info msg="shim disconnected" id=826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8 namespace=k8s.io May 8 00:04:03.405034 containerd[1519]: time="2025-05-08T00:04:03.404921899Z" level=warning msg="cleaning up after shim disconnected" id=826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8 namespace=k8s.io May 8 00:04:03.405034 containerd[1519]: time="2025-05-08T00:04:03.404933702Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:04:03.415987 containerd[1519]: time="2025-05-08T00:04:03.396238758Z" level=info msg="Container to stop \"c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:04:03.419484 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac-shm.mount: Deactivated successfully. 
May 8 00:04:03.423368 containerd[1519]: time="2025-05-08T00:04:03.422038406Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:04:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:04:03.428960 containerd[1519]: time="2025-05-08T00:04:03.428917290Z" level=info msg="StopContainer for \"826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8\" returns successfully" May 8 00:04:03.429396 systemd[1]: cri-containerd-e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac.scope: Deactivated successfully. May 8 00:04:03.430716 containerd[1519]: time="2025-05-08T00:04:03.430686019Z" level=info msg="StopPodSandbox for \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\"" May 8 00:04:03.430893 containerd[1519]: time="2025-05-08T00:04:03.430824743Z" level=info msg="Container to stop \"3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:04:03.430996 containerd[1519]: time="2025-05-08T00:04:03.430972816Z" level=info msg="Container to stop \"eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:04:03.430996 containerd[1519]: time="2025-05-08T00:04:03.430992573Z" level=info msg="Container to stop \"826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:04:03.431071 containerd[1519]: time="2025-05-08T00:04:03.431003985Z" level=info msg="Container to stop \"b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:04:03.431071 containerd[1519]: time="2025-05-08T00:04:03.431016959Z" level=info msg="Container to stop 
\"dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:04:03.433605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8-shm.mount: Deactivated successfully. May 8 00:04:03.441451 systemd[1]: cri-containerd-df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8.scope: Deactivated successfully. May 8 00:04:03.465406 containerd[1519]: time="2025-05-08T00:04:03.465226638Z" level=info msg="shim disconnected" id=e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac namespace=k8s.io May 8 00:04:03.465406 containerd[1519]: time="2025-05-08T00:04:03.465289428Z" level=warning msg="cleaning up after shim disconnected" id=e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac namespace=k8s.io May 8 00:04:03.465406 containerd[1519]: time="2025-05-08T00:04:03.465302663Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:04:03.469252 containerd[1519]: time="2025-05-08T00:04:03.468996900Z" level=info msg="shim disconnected" id=df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8 namespace=k8s.io May 8 00:04:03.469252 containerd[1519]: time="2025-05-08T00:04:03.469055382Z" level=warning msg="cleaning up after shim disconnected" id=df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8 namespace=k8s.io May 8 00:04:03.469252 containerd[1519]: time="2025-05-08T00:04:03.469065110Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:04:03.483720 containerd[1519]: time="2025-05-08T00:04:03.483669742Z" level=info msg="TearDown network for sandbox \"e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac\" successfully" May 8 00:04:03.483955 containerd[1519]: time="2025-05-08T00:04:03.483919267Z" level=info msg="StopPodSandbox for \"e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac\" returns successfully" May 8 
00:04:03.488392 containerd[1519]: time="2025-05-08T00:04:03.487882457Z" level=info msg="TearDown network for sandbox \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" successfully" May 8 00:04:03.488392 containerd[1519]: time="2025-05-08T00:04:03.487911262Z" level=info msg="StopPodSandbox for \"df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8\" returns successfully" May 8 00:04:03.552785 kubelet[2702]: I0508 00:04:03.552648 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-host-proc-sys-net\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.552785 kubelet[2702]: I0508 00:04:03.552701 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-etc-cni-netd\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.552785 kubelet[2702]: I0508 00:04:03.552740 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:04:03.552785 kubelet[2702]: I0508 00:04:03.552749 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:04:03.653386 kubelet[2702]: I0508 00:04:03.653136 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5wcv\" (UniqueName: \"kubernetes.io/projected/c53eb686-1cba-48e9-818e-de9bcf865851-kube-api-access-l5wcv\") pod \"c53eb686-1cba-48e9-818e-de9bcf865851\" (UID: \"c53eb686-1cba-48e9-818e-de9bcf865851\") " May 8 00:04:03.653386 kubelet[2702]: I0508 00:04:03.653296 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c53eb686-1cba-48e9-818e-de9bcf865851-cilium-config-path\") pod \"c53eb686-1cba-48e9-818e-de9bcf865851\" (UID: \"c53eb686-1cba-48e9-818e-de9bcf865851\") " May 8 00:04:03.653386 kubelet[2702]: I0508 00:04:03.653358 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-lib-modules\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653386 kubelet[2702]: I0508 00:04:03.653375 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-run\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653386 kubelet[2702]: I0508 00:04:03.653397 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abf6f3f8-407b-4995-9062-064d854c8d13-hubble-tls\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653704 kubelet[2702]: I0508 00:04:03.653413 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/abf6f3f8-407b-4995-9062-064d854c8d13-clustermesh-secrets\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653704 kubelet[2702]: I0508 00:04:03.653427 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-cgroup\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653704 kubelet[2702]: I0508 00:04:03.653442 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67mxr\" (UniqueName: \"kubernetes.io/projected/abf6f3f8-407b-4995-9062-064d854c8d13-kube-api-access-67mxr\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653704 kubelet[2702]: I0508 00:04:03.653455 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cni-path\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653704 kubelet[2702]: I0508 00:04:03.653471 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-config-path\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653704 kubelet[2702]: I0508 00:04:03.653498 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-xtables-lock\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653927 kubelet[2702]: I0508 00:04:03.653511 2702 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-hostproc\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653927 kubelet[2702]: I0508 00:04:03.653523 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-bpf-maps\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653927 kubelet[2702]: I0508 00:04:03.653536 2702 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-host-proc-sys-kernel\") pod \"abf6f3f8-407b-4995-9062-064d854c8d13\" (UID: \"abf6f3f8-407b-4995-9062-064d854c8d13\") " May 8 00:04:03.653927 kubelet[2702]: I0508 00:04:03.653565 2702 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.653927 kubelet[2702]: I0508 00:04:03.653574 2702 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.653927 kubelet[2702]: I0508 00:04:03.653600 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:04:03.654133 kubelet[2702]: I0508 00:04:03.653626 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:04:03.654133 kubelet[2702]: I0508 00:04:03.653640 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:04:03.654133 kubelet[2702]: I0508 00:04:03.653925 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cni-path" (OuterVolumeSpecName: "cni-path") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:04:03.657851 kubelet[2702]: I0508 00:04:03.657816 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf6f3f8-407b-4995-9062-064d854c8d13-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:04:03.658077 kubelet[2702]: I0508 00:04:03.658028 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:04:03.658924 kubelet[2702]: I0508 00:04:03.658416 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf6f3f8-407b-4995-9062-064d854c8d13-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:04:03.659590 kubelet[2702]: I0508 00:04:03.659554 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c53eb686-1cba-48e9-818e-de9bcf865851-kube-api-access-l5wcv" (OuterVolumeSpecName: "kube-api-access-l5wcv") pod "c53eb686-1cba-48e9-818e-de9bcf865851" (UID: "c53eb686-1cba-48e9-818e-de9bcf865851"). InnerVolumeSpecName "kube-api-access-l5wcv". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:04:03.659655 kubelet[2702]: I0508 00:04:03.659595 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-hostproc" (OuterVolumeSpecName: "hostproc") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:04:03.659655 kubelet[2702]: I0508 00:04:03.659613 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:04:03.659655 kubelet[2702]: I0508 00:04:03.659628 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:04:03.660631 kubelet[2702]: I0508 00:04:03.660556 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c53eb686-1cba-48e9-818e-de9bcf865851-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c53eb686-1cba-48e9-818e-de9bcf865851" (UID: "c53eb686-1cba-48e9-818e-de9bcf865851"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:04:03.660700 kubelet[2702]: I0508 00:04:03.660641 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf6f3f8-407b-4995-9062-064d854c8d13-kube-api-access-67mxr" (OuterVolumeSpecName: "kube-api-access-67mxr") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "kube-api-access-67mxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:04:03.661959 kubelet[2702]: I0508 00:04:03.661937 2702 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "abf6f3f8-407b-4995-9062-064d854c8d13" (UID: "abf6f3f8-407b-4995-9062-064d854c8d13"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:04:03.753688 kubelet[2702]: I0508 00:04:03.753642 2702 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.753688 kubelet[2702]: I0508 00:04:03.753669 2702 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.753688 kubelet[2702]: I0508 00:04:03.753692 2702 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.753894 kubelet[2702]: I0508 00:04:03.753704 2702 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l5wcv\" (UniqueName: \"kubernetes.io/projected/c53eb686-1cba-48e9-818e-de9bcf865851-kube-api-access-l5wcv\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.753894 kubelet[2702]: I0508 00:04:03.753715 2702 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c53eb686-1cba-48e9-818e-de9bcf865851-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.753894 kubelet[2702]: I0508 00:04:03.753723 2702 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.753894 kubelet[2702]: I0508 00:04:03.753731 2702 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.753894 kubelet[2702]: I0508 00:04:03.753740 2702 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abf6f3f8-407b-4995-9062-064d854c8d13-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.753894 kubelet[2702]: I0508 00:04:03.753748 2702 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abf6f3f8-407b-4995-9062-064d854c8d13-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.753894 kubelet[2702]: I0508 00:04:03.753756 2702 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.753894 kubelet[2702]: I0508 00:04:03.753765 2702 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-67mxr\" (UniqueName: \"kubernetes.io/projected/abf6f3f8-407b-4995-9062-064d854c8d13-kube-api-access-67mxr\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.754076 kubelet[2702]: I0508 00:04:03.753774 2702 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:04:03.754076 kubelet[2702]: I0508 00:04:03.753782 2702 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abf6f3f8-407b-4995-9062-064d854c8d13-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" May 8 00:04:03.754076 kubelet[2702]: I0508 00:04:03.753790 2702 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abf6f3f8-407b-4995-9062-064d854c8d13-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:04:04.323452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df4ea8b7598c8c5d125f1dc94652a72eadf3f926b3337a53e1fb8de7b26f25b8-rootfs.mount: Deactivated successfully. May 8 00:04:04.323586 systemd[1]: var-lib-kubelet-pods-abf6f3f8\x2d407b\x2d4995\x2d9062\x2d064d854c8d13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d67mxr.mount: Deactivated successfully. May 8 00:04:04.323672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e515113bb14297443ba1f8368a0672ccc61949a63de5c6771cb27bb2221b89ac-rootfs.mount: Deactivated successfully. May 8 00:04:04.323751 systemd[1]: var-lib-kubelet-pods-c53eb686\x2d1cba\x2d48e9\x2d818e\x2dde9bcf865851-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl5wcv.mount: Deactivated successfully. May 8 00:04:04.323853 systemd[1]: var-lib-kubelet-pods-abf6f3f8\x2d407b\x2d4995\x2d9062\x2d064d854c8d13-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:04:04.323957 systemd[1]: var-lib-kubelet-pods-abf6f3f8\x2d407b\x2d4995\x2d9062\x2d064d854c8d13-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 8 00:04:04.472853 kubelet[2702]: I0508 00:04:04.472816 2702 scope.go:117] "RemoveContainer" containerID="826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8" May 8 00:04:04.480471 containerd[1519]: time="2025-05-08T00:04:04.480419976Z" level=info msg="RemoveContainer for \"826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8\"" May 8 00:04:04.481037 systemd[1]: Removed slice kubepods-burstable-podabf6f3f8_407b_4995_9062_064d854c8d13.slice - libcontainer container kubepods-burstable-podabf6f3f8_407b_4995_9062_064d854c8d13.slice. May 8 00:04:04.481224 systemd[1]: kubepods-burstable-podabf6f3f8_407b_4995_9062_064d854c8d13.slice: Consumed 7.579s CPU time, 123.5M memory peak, 276K read from disk, 13.3M written to disk. May 8 00:04:04.483635 systemd[1]: Removed slice kubepods-besteffort-podc53eb686_1cba_48e9_818e_de9bcf865851.slice - libcontainer container kubepods-besteffort-podc53eb686_1cba_48e9_818e_de9bcf865851.slice. May 8 00:04:04.485890 containerd[1519]: time="2025-05-08T00:04:04.485843764Z" level=info msg="RemoveContainer for \"826720e7b9939ba78bc2b7b29d4a9b050f7c18a2917411378b1ba822e4f462b8\" returns successfully" May 8 00:04:04.486169 kubelet[2702]: I0508 00:04:04.486136 2702 scope.go:117] "RemoveContainer" containerID="3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7" May 8 00:04:04.487522 containerd[1519]: time="2025-05-08T00:04:04.487471884Z" level=info msg="RemoveContainer for \"3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7\"" May 8 00:04:04.493278 containerd[1519]: time="2025-05-08T00:04:04.493233937Z" level=info msg="RemoveContainer for \"3e385485feaaf088edb91506fbfd7d4b9a128501a68169c0701022267b9dfca7\" returns successfully" May 8 00:04:04.495074 kubelet[2702]: I0508 00:04:04.495038 2702 scope.go:117] "RemoveContainer" containerID="eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f" May 8 00:04:04.498141 containerd[1519]: time="2025-05-08T00:04:04.498095405Z" level=info 
msg="RemoveContainer for \"eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f\"" May 8 00:04:04.502223 containerd[1519]: time="2025-05-08T00:04:04.502179311Z" level=info msg="RemoveContainer for \"eba8781ba50391e2070b30977e02b18c9630e98783ea27e40b71cf6e0a84345f\" returns successfully" May 8 00:04:04.502542 kubelet[2702]: I0508 00:04:04.502407 2702 scope.go:117] "RemoveContainer" containerID="dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1" May 8 00:04:04.503781 containerd[1519]: time="2025-05-08T00:04:04.503744352Z" level=info msg="RemoveContainer for \"dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1\"" May 8 00:04:04.508000 containerd[1519]: time="2025-05-08T00:04:04.507963486Z" level=info msg="RemoveContainer for \"dcf2b257c0abfb177a7d78dfe376d08bbec19479338c0ece750725f9d2de65a1\" returns successfully" May 8 00:04:04.508150 kubelet[2702]: I0508 00:04:04.508126 2702 scope.go:117] "RemoveContainer" containerID="b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c" May 8 00:04:04.509191 containerd[1519]: time="2025-05-08T00:04:04.509150747Z" level=info msg="RemoveContainer for \"b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c\"" May 8 00:04:04.512293 containerd[1519]: time="2025-05-08T00:04:04.512260058Z" level=info msg="RemoveContainer for \"b08a1b288e348c482c5ea0502db8078de1868f2979d58157a24095f8f328f34c\" returns successfully" May 8 00:04:04.512493 kubelet[2702]: I0508 00:04:04.512442 2702 scope.go:117] "RemoveContainer" containerID="c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9" May 8 00:04:04.513338 containerd[1519]: time="2025-05-08T00:04:04.513294849Z" level=info msg="RemoveContainer for \"c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9\"" May 8 00:04:04.517427 containerd[1519]: time="2025-05-08T00:04:04.517401869Z" level=info msg="RemoveContainer for \"c08dff5eb4127bf6d05de59340e295d491b325a611ab1193a00a26a035a7c6a9\" returns successfully" May 8 
00:04:04.744365 kubelet[2702]: I0508 00:04:04.744297 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abf6f3f8-407b-4995-9062-064d854c8d13" path="/var/lib/kubelet/pods/abf6f3f8-407b-4995-9062-064d854c8d13/volumes" May 8 00:04:04.745387 kubelet[2702]: I0508 00:04:04.745361 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c53eb686-1cba-48e9-818e-de9bcf865851" path="/var/lib/kubelet/pods/c53eb686-1cba-48e9-818e-de9bcf865851/volumes" May 8 00:04:04.810675 kubelet[2702]: E0508 00:04:04.810615 2702 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:04:05.270203 sshd[4397]: Connection closed by 10.0.0.1 port 38178 May 8 00:04:05.270771 sshd-session[4394]: pam_unix(sshd:session): session closed for user core May 8 00:04:05.290463 systemd[1]: sshd@25-10.0.0.29:22-10.0.0.1:38178.service: Deactivated successfully. May 8 00:04:05.292802 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:04:05.294618 systemd-logind[1506]: Session 26 logged out. Waiting for processes to exit. May 8 00:04:05.304602 systemd[1]: Started sshd@26-10.0.0.29:22-10.0.0.1:38194.service - OpenSSH per-connection server daemon (10.0.0.1:38194). May 8 00:04:05.305355 systemd-logind[1506]: Removed session 26. May 8 00:04:05.341024 sshd[4561]: Accepted publickey for core from 10.0.0.1 port 38194 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs May 8 00:04:05.342796 sshd-session[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:04:05.347233 systemd-logind[1506]: New session 27 of user core. May 8 00:04:05.361458 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 8 00:04:06.202161 sshd[4564]: Connection closed by 10.0.0.1 port 38194
May 8 00:04:06.203156 sshd-session[4561]: pam_unix(sshd:session): session closed for user core
May 8 00:04:06.218229 systemd[1]: sshd@26-10.0.0.29:22-10.0.0.1:38194.service: Deactivated successfully.
May 8 00:04:06.223336 systemd[1]: session-27.scope: Deactivated successfully.
May 8 00:04:06.227529 systemd-logind[1506]: Session 27 logged out. Waiting for processes to exit.
May 8 00:04:06.229560 systemd-logind[1506]: Removed session 27.
May 8 00:04:06.236746 kubelet[2702]: I0508 00:04:06.236660 2702 topology_manager.go:215] "Topology Admit Handler" podUID="83590be5-b4e1-4b69-b4d1-bb4fe7b4679d" podNamespace="kube-system" podName="cilium-gznfg"
May 8 00:04:06.236746 kubelet[2702]: E0508 00:04:06.236724 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abf6f3f8-407b-4995-9062-064d854c8d13" containerName="apply-sysctl-overwrites"
May 8 00:04:06.236746 kubelet[2702]: E0508 00:04:06.236734 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abf6f3f8-407b-4995-9062-064d854c8d13" containerName="mount-bpf-fs"
May 8 00:04:06.236746 kubelet[2702]: E0508 00:04:06.236740 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c53eb686-1cba-48e9-818e-de9bcf865851" containerName="cilium-operator"
May 8 00:04:06.236746 kubelet[2702]: E0508 00:04:06.236748 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abf6f3f8-407b-4995-9062-064d854c8d13" containerName="mount-cgroup"
May 8 00:04:06.236746 kubelet[2702]: E0508 00:04:06.236754 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abf6f3f8-407b-4995-9062-064d854c8d13" containerName="clean-cilium-state"
May 8 00:04:06.236746 kubelet[2702]: E0508 00:04:06.236761 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abf6f3f8-407b-4995-9062-064d854c8d13" containerName="cilium-agent"
May 8 00:04:06.241781 kubelet[2702]: I0508 00:04:06.236782 2702 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf6f3f8-407b-4995-9062-064d854c8d13" containerName="cilium-agent"
May 8 00:04:06.241781 kubelet[2702]: I0508 00:04:06.236790 2702 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53eb686-1cba-48e9-818e-de9bcf865851" containerName="cilium-operator"
May 8 00:04:06.240428 systemd[1]: Started sshd@27-10.0.0.29:22-10.0.0.1:38198.service - OpenSSH per-connection server daemon (10.0.0.1:38198).
May 8 00:04:06.253452 systemd[1]: Created slice kubepods-burstable-pod83590be5_b4e1_4b69_b4d1_bb4fe7b4679d.slice - libcontainer container kubepods-burstable-pod83590be5_b4e1_4b69_b4d1_bb4fe7b4679d.slice.
May 8 00:04:06.280344 sshd[4576]: Accepted publickey for core from 10.0.0.1 port 38198 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:04:06.281994 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:04:06.286499 systemd-logind[1506]: New session 28 of user core.
May 8 00:04:06.296443 systemd[1]: Started session-28.scope - Session 28 of User core.
May 8 00:04:06.346397 sshd[4578]: Connection closed by 10.0.0.1 port 38198
May 8 00:04:06.346985 sshd-session[4576]: pam_unix(sshd:session): session closed for user core
May 8 00:04:06.359383 systemd[1]: sshd@27-10.0.0.29:22-10.0.0.1:38198.service: Deactivated successfully.
May 8 00:04:06.361566 systemd[1]: session-28.scope: Deactivated successfully.
May 8 00:04:06.363274 systemd-logind[1506]: Session 28 logged out. Waiting for processes to exit.
May 8 00:04:06.368237 kubelet[2702]: I0508 00:04:06.368188 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-cilium-run\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368237 kubelet[2702]: I0508 00:04:06.368232 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-bpf-maps\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368408 kubelet[2702]: I0508 00:04:06.368256 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-hostproc\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368408 kubelet[2702]: I0508 00:04:06.368271 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-cilium-cgroup\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368408 kubelet[2702]: I0508 00:04:06.368294 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-cilium-ipsec-secrets\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368408 kubelet[2702]: I0508 00:04:06.368310 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-host-proc-sys-kernel\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368408 kubelet[2702]: I0508 00:04:06.368339 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-xtables-lock\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368408 kubelet[2702]: I0508 00:04:06.368355 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-cni-path\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368555 kubelet[2702]: I0508 00:04:06.368367 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-etc-cni-netd\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368555 kubelet[2702]: I0508 00:04:06.368380 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-lib-modules\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368555 kubelet[2702]: I0508 00:04:06.368392 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-cilium-config-path\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368555 kubelet[2702]: I0508 00:04:06.368510 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-clustermesh-secrets\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368555 kubelet[2702]: I0508 00:04:06.368528 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-hubble-tls\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368669 kubelet[2702]: I0508 00:04:06.368545 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-256ll\" (UniqueName: \"kubernetes.io/projected/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-kube-api-access-256ll\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.368669 kubelet[2702]: I0508 00:04:06.368593 2702 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83590be5-b4e1-4b69-b4d1-bb4fe7b4679d-host-proc-sys-net\") pod \"cilium-gznfg\" (UID: \"83590be5-b4e1-4b69-b4d1-bb4fe7b4679d\") " pod="kube-system/cilium-gznfg"
May 8 00:04:06.371595 systemd[1]: Started sshd@28-10.0.0.29:22-10.0.0.1:38208.service - OpenSSH per-connection server daemon (10.0.0.1:38208).
May 8 00:04:06.372652 systemd-logind[1506]: Removed session 28.
May 8 00:04:06.407209 sshd[4586]: Accepted publickey for core from 10.0.0.1 port 38208 ssh2: RSA SHA256:eeIhQYFcPnu6mMScJoclBSH4fha0xWFb8+CJYn23RAs
May 8 00:04:06.408819 sshd-session[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:04:06.413716 systemd-logind[1506]: New session 29 of user core.
May 8 00:04:06.427467 systemd[1]: Started session-29.scope - Session 29 of User core.
May 8 00:04:06.559914 containerd[1519]: time="2025-05-08T00:04:06.559766750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gznfg,Uid:83590be5-b4e1-4b69-b4d1-bb4fe7b4679d,Namespace:kube-system,Attempt:0,}"
May 8 00:04:06.593871 containerd[1519]: time="2025-05-08T00:04:06.593757353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:04:06.593871 containerd[1519]: time="2025-05-08T00:04:06.593820774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:04:06.593871 containerd[1519]: time="2025-05-08T00:04:06.593833468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:04:06.594063 containerd[1519]: time="2025-05-08T00:04:06.593915924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:04:06.613476 systemd[1]: Started cri-containerd-b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956.scope - libcontainer container b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956.
May 8 00:04:06.638524 containerd[1519]: time="2025-05-08T00:04:06.638459759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gznfg,Uid:83590be5-b4e1-4b69-b4d1-bb4fe7b4679d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\""
May 8 00:04:06.642073 containerd[1519]: time="2025-05-08T00:04:06.642019052Z" level=info msg="CreateContainer within sandbox \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:04:06.657654 containerd[1519]: time="2025-05-08T00:04:06.657589782Z" level=info msg="CreateContainer within sandbox \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a2af8896f218f41d44206f7b2e546ee17add2b15d76370adf63c6072e4f84916\""
May 8 00:04:06.658203 containerd[1519]: time="2025-05-08T00:04:06.658145250Z" level=info msg="StartContainer for \"a2af8896f218f41d44206f7b2e546ee17add2b15d76370adf63c6072e4f84916\""
May 8 00:04:06.686469 systemd[1]: Started cri-containerd-a2af8896f218f41d44206f7b2e546ee17add2b15d76370adf63c6072e4f84916.scope - libcontainer container a2af8896f218f41d44206f7b2e546ee17add2b15d76370adf63c6072e4f84916.
May 8 00:04:06.715113 containerd[1519]: time="2025-05-08T00:04:06.715064013Z" level=info msg="StartContainer for \"a2af8896f218f41d44206f7b2e546ee17add2b15d76370adf63c6072e4f84916\" returns successfully"
May 8 00:04:06.726140 systemd[1]: cri-containerd-a2af8896f218f41d44206f7b2e546ee17add2b15d76370adf63c6072e4f84916.scope: Deactivated successfully.
May 8 00:04:06.762624 containerd[1519]: time="2025-05-08T00:04:06.762550236Z" level=info msg="shim disconnected" id=a2af8896f218f41d44206f7b2e546ee17add2b15d76370adf63c6072e4f84916 namespace=k8s.io
May 8 00:04:06.762624 containerd[1519]: time="2025-05-08T00:04:06.762617484Z" level=warning msg="cleaning up after shim disconnected" id=a2af8896f218f41d44206f7b2e546ee17add2b15d76370adf63c6072e4f84916 namespace=k8s.io
May 8 00:04:06.762624 containerd[1519]: time="2025-05-08T00:04:06.762628436Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:04:07.112370 kubelet[2702]: I0508 00:04:07.112293 2702 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:04:07Z","lastTransitionTime":"2025-05-08T00:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 00:04:07.487204 containerd[1519]: time="2025-05-08T00:04:07.487150533Z" level=info msg="CreateContainer within sandbox \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:04:07.502198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825660433.mount: Deactivated successfully.
May 8 00:04:07.503609 containerd[1519]: time="2025-05-08T00:04:07.503551172Z" level=info msg="CreateContainer within sandbox \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cbbfe85670f5b43dae33097d15e4dcc721f40f489d67eb4b3988f9cd8634ac5e\""
May 8 00:04:07.504152 containerd[1519]: time="2025-05-08T00:04:07.504110947Z" level=info msg="StartContainer for \"cbbfe85670f5b43dae33097d15e4dcc721f40f489d67eb4b3988f9cd8634ac5e\""
May 8 00:04:07.537463 systemd[1]: Started cri-containerd-cbbfe85670f5b43dae33097d15e4dcc721f40f489d67eb4b3988f9cd8634ac5e.scope - libcontainer container cbbfe85670f5b43dae33097d15e4dcc721f40f489d67eb4b3988f9cd8634ac5e.
May 8 00:04:07.566305 containerd[1519]: time="2025-05-08T00:04:07.566249955Z" level=info msg="StartContainer for \"cbbfe85670f5b43dae33097d15e4dcc721f40f489d67eb4b3988f9cd8634ac5e\" returns successfully"
May 8 00:04:07.573386 systemd[1]: cri-containerd-cbbfe85670f5b43dae33097d15e4dcc721f40f489d67eb4b3988f9cd8634ac5e.scope: Deactivated successfully.
May 8 00:04:07.600260 containerd[1519]: time="2025-05-08T00:04:07.600174470Z" level=info msg="shim disconnected" id=cbbfe85670f5b43dae33097d15e4dcc721f40f489d67eb4b3988f9cd8634ac5e namespace=k8s.io
May 8 00:04:07.600260 containerd[1519]: time="2025-05-08T00:04:07.600243952Z" level=warning msg="cleaning up after shim disconnected" id=cbbfe85670f5b43dae33097d15e4dcc721f40f489d67eb4b3988f9cd8634ac5e namespace=k8s.io
May 8 00:04:07.600260 containerd[1519]: time="2025-05-08T00:04:07.600252669Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:04:08.475601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbbfe85670f5b43dae33097d15e4dcc721f40f489d67eb4b3988f9cd8634ac5e-rootfs.mount: Deactivated successfully.
May 8 00:04:08.490537 containerd[1519]: time="2025-05-08T00:04:08.490477762Z" level=info msg="CreateContainer within sandbox \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:04:08.513343 containerd[1519]: time="2025-05-08T00:04:08.513250488Z" level=info msg="CreateContainer within sandbox \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3020f6065235edbed265148d4a38f0b4afe6787f63a2c8e41b5207f44f6c5a24\""
May 8 00:04:08.513943 containerd[1519]: time="2025-05-08T00:04:08.513903200Z" level=info msg="StartContainer for \"3020f6065235edbed265148d4a38f0b4afe6787f63a2c8e41b5207f44f6c5a24\""
May 8 00:04:08.549530 systemd[1]: Started cri-containerd-3020f6065235edbed265148d4a38f0b4afe6787f63a2c8e41b5207f44f6c5a24.scope - libcontainer container 3020f6065235edbed265148d4a38f0b4afe6787f63a2c8e41b5207f44f6c5a24.
May 8 00:04:08.602310 systemd[1]: cri-containerd-3020f6065235edbed265148d4a38f0b4afe6787f63a2c8e41b5207f44f6c5a24.scope: Deactivated successfully.
May 8 00:04:08.690183 containerd[1519]: time="2025-05-08T00:04:08.690123627Z" level=info msg="StartContainer for \"3020f6065235edbed265148d4a38f0b4afe6787f63a2c8e41b5207f44f6c5a24\" returns successfully"
May 8 00:04:08.718424 containerd[1519]: time="2025-05-08T00:04:08.718316501Z" level=info msg="shim disconnected" id=3020f6065235edbed265148d4a38f0b4afe6787f63a2c8e41b5207f44f6c5a24 namespace=k8s.io
May 8 00:04:08.718424 containerd[1519]: time="2025-05-08T00:04:08.718402475Z" level=warning msg="cleaning up after shim disconnected" id=3020f6065235edbed265148d4a38f0b4afe6787f63a2c8e41b5207f44f6c5a24 namespace=k8s.io
May 8 00:04:08.718424 containerd[1519]: time="2025-05-08T00:04:08.718412974Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:04:09.475367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3020f6065235edbed265148d4a38f0b4afe6787f63a2c8e41b5207f44f6c5a24-rootfs.mount: Deactivated successfully.
May 8 00:04:09.494503 containerd[1519]: time="2025-05-08T00:04:09.494439868Z" level=info msg="CreateContainer within sandbox \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:04:09.512751 containerd[1519]: time="2025-05-08T00:04:09.512691025Z" level=info msg="CreateContainer within sandbox \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"913321f91ec8091282900da2e98c6d3984330653aaae57f7242dbda5345f98d7\""
May 8 00:04:09.513295 containerd[1519]: time="2025-05-08T00:04:09.513244718Z" level=info msg="StartContainer for \"913321f91ec8091282900da2e98c6d3984330653aaae57f7242dbda5345f98d7\""
May 8 00:04:09.543470 systemd[1]: Started cri-containerd-913321f91ec8091282900da2e98c6d3984330653aaae57f7242dbda5345f98d7.scope - libcontainer container 913321f91ec8091282900da2e98c6d3984330653aaae57f7242dbda5345f98d7.
May 8 00:04:09.567763 systemd[1]: cri-containerd-913321f91ec8091282900da2e98c6d3984330653aaae57f7242dbda5345f98d7.scope: Deactivated successfully.
May 8 00:04:09.569408 containerd[1519]: time="2025-05-08T00:04:09.569368093Z" level=info msg="StartContainer for \"913321f91ec8091282900da2e98c6d3984330653aaae57f7242dbda5345f98d7\" returns successfully"
May 8 00:04:09.595203 containerd[1519]: time="2025-05-08T00:04:09.595119179Z" level=info msg="shim disconnected" id=913321f91ec8091282900da2e98c6d3984330653aaae57f7242dbda5345f98d7 namespace=k8s.io
May 8 00:04:09.595203 containerd[1519]: time="2025-05-08T00:04:09.595176949Z" level=warning msg="cleaning up after shim disconnected" id=913321f91ec8091282900da2e98c6d3984330653aaae57f7242dbda5345f98d7 namespace=k8s.io
May 8 00:04:09.595203 containerd[1519]: time="2025-05-08T00:04:09.595185315Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:04:09.811765 kubelet[2702]: E0508 00:04:09.811601 2702 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:04:10.475374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-913321f91ec8091282900da2e98c6d3984330653aaae57f7242dbda5345f98d7-rootfs.mount: Deactivated successfully.
May 8 00:04:10.499040 containerd[1519]: time="2025-05-08T00:04:10.498926828Z" level=info msg="CreateContainer within sandbox \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:04:10.527092 containerd[1519]: time="2025-05-08T00:04:10.527034808Z" level=info msg="CreateContainer within sandbox \"b42d44011db6fcb25612f482e20d36c37070cd8e9bb7fb4cfea481454dcbb956\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"549856ea17c8c9ae4afe79872ae25f26cbd446b55c20e829a71e068fadf15af3\""
May 8 00:04:10.527654 containerd[1519]: time="2025-05-08T00:04:10.527606414Z" level=info msg="StartContainer for \"549856ea17c8c9ae4afe79872ae25f26cbd446b55c20e829a71e068fadf15af3\""
May 8 00:04:10.554517 systemd[1]: Started cri-containerd-549856ea17c8c9ae4afe79872ae25f26cbd446b55c20e829a71e068fadf15af3.scope - libcontainer container 549856ea17c8c9ae4afe79872ae25f26cbd446b55c20e829a71e068fadf15af3.
May 8 00:04:10.652103 containerd[1519]: time="2025-05-08T00:04:10.652044902Z" level=info msg="StartContainer for \"549856ea17c8c9ae4afe79872ae25f26cbd446b55c20e829a71e068fadf15af3\" returns successfully"
May 8 00:04:11.079367 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 8 00:04:11.514885 kubelet[2702]: I0508 00:04:11.514785 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gznfg" podStartSLOduration=5.514759349 podStartE2EDuration="5.514759349s" podCreationTimestamp="2025-05-08 00:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:04:11.514676612 +0000 UTC m=+96.866269559" watchObservedRunningTime="2025-05-08 00:04:11.514759349 +0000 UTC m=+96.866352287"
May 8 00:04:15.350361 systemd-networkd[1434]: lxc_health: Link UP
May 8 00:04:15.351779 systemd-networkd[1434]: lxc_health: Gained carrier
May 8 00:04:16.429121 systemd-networkd[1434]: lxc_health: Gained IPv6LL
May 8 00:04:17.686348 kubelet[2702]: E0508 00:04:17.685737 2702 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:36416->127.0.0.1:40883: read tcp 127.0.0.1:36416->127.0.0.1:40883: read: connection reset by peer
May 8 00:04:21.892931 sshd[4590]: Connection closed by 10.0.0.1 port 38208
May 8 00:04:21.893500 sshd-session[4586]: pam_unix(sshd:session): session closed for user core
May 8 00:04:21.898335 systemd[1]: sshd@28-10.0.0.29:22-10.0.0.1:38208.service: Deactivated successfully.
May 8 00:04:21.900555 systemd[1]: session-29.scope: Deactivated successfully.
May 8 00:04:21.901260 systemd-logind[1506]: Session 29 logged out. Waiting for processes to exit.
May 8 00:04:21.902200 systemd-logind[1506]: Removed session 29.