Sep 13 10:19:55.827021 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat Sep 13 08:30:13 -00 2025
Sep 13 10:19:55.827044 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:19:55.827056 kernel: BIOS-provided physical RAM map:
Sep 13 10:19:55.827062 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 10:19:55.827069 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 10:19:55.827075 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 10:19:55.827083 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 10:19:55.827090 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 10:19:55.827099 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 13 10:19:55.827108 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 13 10:19:55.827115 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 13 10:19:55.827121 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 13 10:19:55.827128 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 13 10:19:55.827135 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 13 10:19:55.827143 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 13 10:19:55.827152 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 10:19:55.827162 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 13 10:19:55.827169 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 13 10:19:55.827176 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 13 10:19:55.827183 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 13 10:19:55.827190 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 13 10:19:55.827197 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 10:19:55.827204 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 10:19:55.827211 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 10:19:55.827218 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 13 10:19:55.827227 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 10:19:55.827234 kernel: NX (Execute Disable) protection: active
Sep 13 10:19:55.827241 kernel: APIC: Static calls initialized
Sep 13 10:19:55.827249 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 13 10:19:55.827256 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 13 10:19:55.827263 kernel: extended physical RAM map:
Sep 13 10:19:55.827270 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 10:19:55.827277 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 10:19:55.827285 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 10:19:55.827292 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 10:19:55.827299 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 10:19:55.827308 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 13 10:19:55.827315 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 13 10:19:55.827323 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 13 10:19:55.827330 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 13 10:19:55.827340 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 13 10:19:55.827348 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 13 10:19:55.827357 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 13 10:19:55.827365 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 13 10:19:55.827372 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 13 10:19:55.827380 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 13 10:19:55.827387 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 13 10:19:55.827395 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 10:19:55.827402 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 13 10:19:55.827409 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 13 10:19:55.827417 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 13 10:19:55.827424 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 13 10:19:55.827434 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 13 10:19:55.827441 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 10:19:55.827448 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 10:19:55.827456 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 10:19:55.827463 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 13 10:19:55.827470 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 10:19:55.827480 kernel: efi: EFI v2.7 by EDK II
Sep 13 10:19:55.827488 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 13 10:19:55.827495 kernel: random: crng init done
Sep 13 10:19:55.827505 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 13 10:19:55.827512 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 13 10:19:55.827536 kernel: secureboot: Secure boot disabled
Sep 13 10:19:55.827544 kernel: SMBIOS 2.8 present.
Sep 13 10:19:55.827551 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 13 10:19:55.827559 kernel: DMI: Memory slots populated: 1/1
Sep 13 10:19:55.827566 kernel: Hypervisor detected: KVM
Sep 13 10:19:55.827573 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 10:19:55.827581 kernel: kvm-clock: using sched offset of 5288480473 cycles
Sep 13 10:19:55.827588 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 10:19:55.827596 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 10:19:55.827604 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 10:19:55.827612 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 10:19:55.827621 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 13 10:19:55.827629 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 10:19:55.827636 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 10:19:55.827644 kernel: Using GB pages for direct mapping
Sep 13 10:19:55.827652 kernel: ACPI: Early table checksum verification disabled
Sep 13 10:19:55.827659 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 13 10:19:55.827667 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 10:19:55.827675 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:19:55.827683 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:19:55.827693 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 13 10:19:55.827700 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:19:55.827708 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:19:55.827716 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:19:55.827723 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:19:55.827731 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 10:19:55.827738 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 13 10:19:55.827746 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 13 10:19:55.827754 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 13 10:19:55.827763 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 13 10:19:55.827771 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 13 10:19:55.827778 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 13 10:19:55.827786 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 13 10:19:55.827793 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 13 10:19:55.827801 kernel: No NUMA configuration found
Sep 13 10:19:55.827808 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 13 10:19:55.827816 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 13 10:19:55.827823 kernel: Zone ranges:
Sep 13 10:19:55.827833 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 10:19:55.827841 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 13 10:19:55.827848 kernel: Normal empty
Sep 13 10:19:55.827856 kernel: Device empty
Sep 13 10:19:55.827863 kernel: Movable zone start for each node
Sep 13 10:19:55.827871 kernel: Early memory node ranges
Sep 13 10:19:55.827878 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 10:19:55.827886 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 13 10:19:55.827896 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 13 10:19:55.827905 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 13 10:19:55.827913 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 13 10:19:55.827927 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 13 10:19:55.827935 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 13 10:19:55.827942 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 13 10:19:55.827950 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 13 10:19:55.827957 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 10:19:55.827968 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 10:19:55.827984 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 13 10:19:55.827992 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 10:19:55.827999 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 13 10:19:55.828007 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 13 10:19:55.828015 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 13 10:19:55.828025 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 13 10:19:55.828033 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 13 10:19:55.828041 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 10:19:55.828049 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 10:19:55.828057 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 10:19:55.828066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 10:19:55.828074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 10:19:55.828082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 10:19:55.828090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 10:19:55.828098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 10:19:55.828106 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 10:19:55.828114 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 10:19:55.828122 kernel: TSC deadline timer available
Sep 13 10:19:55.828132 kernel: CPU topo: Max. logical packages: 1
Sep 13 10:19:55.828140 kernel: CPU topo: Max. logical dies: 1
Sep 13 10:19:55.828148 kernel: CPU topo: Max. dies per package: 1
Sep 13 10:19:55.828156 kernel: CPU topo: Max. threads per core: 1
Sep 13 10:19:55.828163 kernel: CPU topo: Num. cores per package: 4
Sep 13 10:19:55.828171 kernel: CPU topo: Num. threads per package: 4
Sep 13 10:19:55.828179 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 13 10:19:55.828187 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 10:19:55.828195 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 10:19:55.828203 kernel: kvm-guest: setup PV sched yield
Sep 13 10:19:55.828213 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 13 10:19:55.828221 kernel: Booting paravirtualized kernel on KVM
Sep 13 10:19:55.828229 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 10:19:55.828237 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 10:19:55.828245 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 13 10:19:55.828253 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 13 10:19:55.828260 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 10:19:55.828268 kernel: kvm-guest: PV spinlocks enabled
Sep 13 10:19:55.828276 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 10:19:55.828287 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:19:55.828298 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 10:19:55.828306 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 10:19:55.828314 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 10:19:55.828322 kernel: Fallback order for Node 0: 0
Sep 13 10:19:55.828329 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 13 10:19:55.828337 kernel: Policy zone: DMA32
Sep 13 10:19:55.828345 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 10:19:55.828355 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 10:19:55.828363 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 13 10:19:55.828371 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 10:19:55.828379 kernel: Dynamic Preempt: voluntary
Sep 13 10:19:55.828387 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 10:19:55.828395 kernel: rcu: RCU event tracing is enabled.
Sep 13 10:19:55.828403 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 10:19:55.828412 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 10:19:55.828420 kernel: Rude variant of Tasks RCU enabled.
Sep 13 10:19:55.828430 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 10:19:55.828438 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 10:19:55.828448 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 10:19:55.828456 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:19:55.828464 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:19:55.828472 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:19:55.828480 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 10:19:55.828488 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 10:19:55.828496 kernel: Console: colour dummy device 80x25
Sep 13 10:19:55.828506 kernel: printk: legacy console [ttyS0] enabled
Sep 13 10:19:55.828525 kernel: ACPI: Core revision 20240827
Sep 13 10:19:55.828533 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 10:19:55.828553 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 10:19:55.828561 kernel: x2apic enabled
Sep 13 10:19:55.828569 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 10:19:55.828577 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 10:19:55.828585 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 10:19:55.828593 kernel: kvm-guest: setup PV IPIs
Sep 13 10:19:55.828603 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 10:19:55.828611 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:19:55.828619 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 10:19:55.828627 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 10:19:55.828635 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 10:19:55.828643 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 10:19:55.828651 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 10:19:55.828659 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 10:19:55.828667 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 10:19:55.828677 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 10:19:55.828685 kernel: active return thunk: retbleed_return_thunk
Sep 13 10:19:55.828693 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 10:19:55.828704 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 10:19:55.828712 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 10:19:55.828720 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 10:19:55.828729 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 10:19:55.828737 kernel: active return thunk: srso_return_thunk
Sep 13 10:19:55.828747 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 10:19:55.828755 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 10:19:55.828764 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 10:19:55.828771 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 10:19:55.828779 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 10:19:55.828787 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 10:19:55.828795 kernel: Freeing SMP alternatives memory: 32K
Sep 13 10:19:55.828803 kernel: pid_max: default: 32768 minimum: 301
Sep 13 10:19:55.828811 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 10:19:55.828821 kernel: landlock: Up and running.
Sep 13 10:19:55.828829 kernel: SELinux: Initializing.
Sep 13 10:19:55.828837 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:19:55.828845 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:19:55.828854 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 10:19:55.828862 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 10:19:55.828869 kernel: ... version: 0
Sep 13 10:19:55.828877 kernel: ... bit width: 48
Sep 13 10:19:55.828885 kernel: ... generic registers: 6
Sep 13 10:19:55.828895 kernel: ... value mask: 0000ffffffffffff
Sep 13 10:19:55.828903 kernel: ... max period: 00007fffffffffff
Sep 13 10:19:55.828911 kernel: ... fixed-purpose events: 0
Sep 13 10:19:55.828919 kernel: ... event mask: 000000000000003f
Sep 13 10:19:55.828935 kernel: signal: max sigframe size: 1776
Sep 13 10:19:55.828942 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 10:19:55.828951 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 10:19:55.828962 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 13 10:19:55.828970 kernel: smp: Bringing up secondary CPUs ...
Sep 13 10:19:55.828980 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 10:19:55.828987 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 10:19:55.828995 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 10:19:55.829003 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 10:19:55.829012 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54088K init, 2876K bss, 137196K reserved, 0K cma-reserved)
Sep 13 10:19:55.829020 kernel: devtmpfs: initialized
Sep 13 10:19:55.829027 kernel: x86/mm: Memory block size: 128MB
Sep 13 10:19:55.829035 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 13 10:19:55.829043 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 13 10:19:55.829053 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 13 10:19:55.829062 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 13 10:19:55.829070 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 13 10:19:55.829078 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 13 10:19:55.829086 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 10:19:55.829094 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 10:19:55.829102 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 10:19:55.829110 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 10:19:55.829118 kernel: audit: initializing netlink subsys (disabled)
Sep 13 10:19:55.829128 kernel: audit: type=2000 audit(1757758792.821:1): state=initialized audit_enabled=0 res=1
Sep 13 10:19:55.829135 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 10:19:55.829143 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 10:19:55.829151 kernel: cpuidle: using governor menu
Sep 13 10:19:55.829159 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 10:19:55.829167 kernel: dca service started, version 1.12.1
Sep 13 10:19:55.829175 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 13 10:19:55.829183 kernel: PCI: Using configuration type 1 for base access
Sep 13 10:19:55.829191 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 10:19:55.829201 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 10:19:55.829209 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 10:19:55.829216 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 10:19:55.829224 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 10:19:55.829232 kernel: ACPI: Added _OSI(Module Device)
Sep 13 10:19:55.829240 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 10:19:55.829248 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 10:19:55.829256 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 10:19:55.829264 kernel: ACPI: Interpreter enabled
Sep 13 10:19:55.829273 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 10:19:55.829281 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 10:19:55.829289 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 10:19:55.829297 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 10:19:55.829305 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 10:19:55.829313 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 10:19:55.829612 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 10:19:55.829748 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 10:19:55.829875 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 10:19:55.829886 kernel: PCI host bridge to bus 0000:00
Sep 13 10:19:55.830060 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 10:19:55.830176 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 10:19:55.830287 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 10:19:55.830398 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 13 10:19:55.830508 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 13 10:19:55.830644 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 13 10:19:55.830756 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 10:19:55.830908 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 13 10:19:55.831060 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 13 10:19:55.831184 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 13 10:19:55.831305 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 13 10:19:55.831430 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 13 10:19:55.831611 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 10:19:55.831804 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 13 10:19:55.831939 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 13 10:19:55.832063 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 13 10:19:55.832185 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 13 10:19:55.832324 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 13 10:19:55.832454 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 13 10:19:55.832594 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 13 10:19:55.832719 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 13 10:19:55.832858 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 13 10:19:55.833002 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 13 10:19:55.833156 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 13 10:19:55.833333 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 13 10:19:55.833544 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 13 10:19:55.833699 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 13 10:19:55.833831 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 10:19:55.833977 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 13 10:19:55.834099 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 13 10:19:55.834220 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 13 10:19:55.834383 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 13 10:19:55.834506 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 13 10:19:55.834534 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 10:19:55.834543 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 10:19:55.834551 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 10:19:55.834559 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 10:19:55.834567 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 10:19:55.834574 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 10:19:55.834586 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 10:19:55.834594 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 10:19:55.834601 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 10:19:55.834609 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 10:19:55.834617 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 10:19:55.834625 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 10:19:55.834633 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 10:19:55.834640 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 10:19:55.834648 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 10:19:55.834658 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 10:19:55.834666 kernel: iommu: Default domain type: Translated
Sep 13 10:19:55.834674 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 10:19:55.834682 kernel: efivars: Registered efivars operations
Sep 13 10:19:55.834690 kernel: PCI: Using ACPI for IRQ routing
Sep 13 10:19:55.834698 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 10:19:55.834705 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 13 10:19:55.834713 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 13 10:19:55.834721 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 13 10:19:55.834729 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 13 10:19:55.834739 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 13 10:19:55.834746 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 13 10:19:55.834754 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 13 10:19:55.834762 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 13 10:19:55.834886 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 10:19:55.835020 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 10:19:55.835142 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 10:19:55.835156 kernel: vgaarb: loaded
Sep 13 10:19:55.835164 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 10:19:55.835172 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 10:19:55.835180 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 10:19:55.835187 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 10:19:55.835195 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 10:19:55.835204 kernel: pnp: PnP ACPI init
Sep 13 10:19:55.835372 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 13 10:19:55.835390 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 10:19:55.835399 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 10:19:55.835407 kernel: NET: Registered PF_INET protocol family
Sep 13 10:19:55.835415 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 10:19:55.835424 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 10:19:55.835432 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 10:19:55.835440 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 10:19:55.835449 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 10:19:55.835457 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 10:19:55.835467 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:19:55.835475 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:19:55.835483 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 10:19:55.835492 kernel: NET: Registered PF_XDP protocol family
Sep 13 10:19:55.835633 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 13 10:19:55.835758 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 13 10:19:55.835871 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 10:19:55.835993 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 10:19:55.836109 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 10:19:55.836220 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 13 10:19:55.836340 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 13 10:19:55.836452 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 13 10:19:55.836464 kernel: PCI: CLS 0 bytes, default 64
Sep 13 10:19:55.836472 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:19:55.836480 kernel: Initialise system trusted keyrings
Sep 13 10:19:55.836491 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 10:19:55.836500 kernel: Key type asymmetric registered
Sep 13 10:19:55.836507 kernel: Asymmetric key parser 'x509' registered
Sep 13 10:19:55.836528 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 10:19:55.836548 kernel: io scheduler mq-deadline registered
Sep 13 10:19:55.836557 kernel: io scheduler kyber registered
Sep 13 10:19:55.836565 kernel: io scheduler bfq registered
Sep 13 10:19:55.836577 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 10:19:55.836587 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 10:19:55.836596 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 10:19:55.836604 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 10:19:55.836612 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 10:19:55.836620 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 10:19:55.836628 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 10:19:55.836636 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 10:19:55.836644 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 10:19:55.836653 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 10:19:55.836796 kernel: rtc_cmos 00:04: RTC can
wake from S4 Sep 13 10:19:55.836914 kernel: rtc_cmos 00:04: registered as rtc0 Sep 13 10:19:55.837041 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T10:19:55 UTC (1757758795) Sep 13 10:19:55.837156 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 13 10:19:55.837166 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 13 10:19:55.837174 kernel: efifb: probing for efifb Sep 13 10:19:55.837183 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 13 10:19:55.837194 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 13 10:19:55.837202 kernel: efifb: scrolling: redraw Sep 13 10:19:55.837210 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 13 10:19:55.837218 kernel: Console: switching to colour frame buffer device 160x50 Sep 13 10:19:55.837226 kernel: fb0: EFI VGA frame buffer device Sep 13 10:19:55.837234 kernel: pstore: Using crash dump compression: deflate Sep 13 10:19:55.837242 kernel: pstore: Registered efi_pstore as persistent store backend Sep 13 10:19:55.837250 kernel: NET: Registered PF_INET6 protocol family Sep 13 10:19:55.837259 kernel: Segment Routing with IPv6 Sep 13 10:19:55.837269 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 10:19:55.837277 kernel: NET: Registered PF_PACKET protocol family Sep 13 10:19:55.837285 kernel: Key type dns_resolver registered Sep 13 10:19:55.837293 kernel: IPI shorthand broadcast: enabled Sep 13 10:19:55.837301 kernel: sched_clock: Marking stable (4013003566, 157681712)->(4191758022, -21072744) Sep 13 10:19:55.837309 kernel: registered taskstats version 1 Sep 13 10:19:55.837317 kernel: Loading compiled-in X.509 certificates Sep 13 10:19:55.837325 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: cbb54677ad1c578839cdade5ab8500bbdb72e350' Sep 13 10:19:55.837333 kernel: Demotion targets for Node 0: null Sep 13 10:19:55.837341 kernel: Key type .fscrypt registered Sep 13 10:19:55.837351 kernel: Key type 
fscrypt-provisioning registered Sep 13 10:19:55.837359 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 10:19:55.837367 kernel: ima: Allocated hash algorithm: sha1 Sep 13 10:19:55.837375 kernel: ima: No architecture policies found Sep 13 10:19:55.837383 kernel: clk: Disabling unused clocks Sep 13 10:19:55.837404 kernel: Warning: unable to open an initial console. Sep 13 10:19:55.837413 kernel: Freeing unused kernel image (initmem) memory: 54088K Sep 13 10:19:55.837421 kernel: Write protecting the kernel read-only data: 24576k Sep 13 10:19:55.837450 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 13 10:19:55.837459 kernel: Run /init as init process Sep 13 10:19:55.837468 kernel: with arguments: Sep 13 10:19:55.837475 kernel: /init Sep 13 10:19:55.837483 kernel: with environment: Sep 13 10:19:55.837491 kernel: HOME=/ Sep 13 10:19:55.837499 kernel: TERM=linux Sep 13 10:19:55.837507 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 10:19:55.837538 systemd[1]: Successfully made /usr/ read-only. Sep 13 10:19:55.837554 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 13 10:19:55.837569 systemd[1]: Detected virtualization kvm. Sep 13 10:19:55.837577 systemd[1]: Detected architecture x86-64. Sep 13 10:19:55.837586 systemd[1]: Running in initrd. Sep 13 10:19:55.837594 systemd[1]: No hostname configured, using default hostname. Sep 13 10:19:55.837603 systemd[1]: Hostname set to . Sep 13 10:19:55.837611 systemd[1]: Initializing machine ID from VM UUID. Sep 13 10:19:55.837623 systemd[1]: Queued start job for default target initrd.target. 
Sep 13 10:19:55.837631 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:19:55.837640 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:19:55.837650 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 10:19:55.837658 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 10:19:55.837667 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 10:19:55.837677 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 10:19:55.837688 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 10:19:55.837697 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 10:19:55.837706 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:19:55.837714 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:19:55.837723 systemd[1]: Reached target paths.target - Path Units.
Sep 13 10:19:55.837731 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 10:19:55.837740 systemd[1]: Reached target swap.target - Swaps.
Sep 13 10:19:55.837748 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 10:19:55.837757 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 10:19:55.837768 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 10:19:55.837776 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 10:19:55.837785 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 13 10:19:55.837793 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:19:55.837802 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:19:55.837811 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:19:55.837819 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 10:19:55.837828 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 10:19:55.837839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 10:19:55.837847 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 10:19:55.837856 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 13 10:19:55.837865 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 10:19:55.837873 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 10:19:55.837882 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 10:19:55.837890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:19:55.837899 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 10:19:55.837910 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:19:55.837919 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 10:19:55.837966 systemd-journald[220]: Collecting audit messages is disabled.
Sep 13 10:19:55.837988 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 10:19:55.837997 systemd-journald[220]: Journal started
Sep 13 10:19:55.838018 systemd-journald[220]: Runtime Journal (/run/log/journal/6fc6f5bd7117407db1572d9d59f6a1bf) is 6M, max 48.4M, 42.4M free.
Sep 13 10:19:55.833505 systemd-modules-load[221]: Inserted module 'overlay'
Sep 13 10:19:55.839683 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 10:19:55.843862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 10:19:55.846587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:19:55.856740 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 10:19:55.863463 systemd-tmpfiles[236]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 13 10:19:55.863641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 10:19:55.869538 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 10:19:55.875551 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 10:19:55.878258 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 13 10:19:55.878949 kernel: Bridge firewalling registered
Sep 13 10:19:55.879577 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:19:55.881663 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:19:55.887324 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:19:55.892436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:19:55.901463 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 10:19:55.905818 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 10:19:55.910496 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:19:55.918203 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 10:19:55.933982 dracut-cmdline[258]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:19:56.006823 systemd-resolved[261]: Positive Trust Anchors:
Sep 13 10:19:56.006852 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 10:19:56.006892 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 10:19:56.010548 systemd-resolved[261]: Defaulting to hostname 'linux'.
Sep 13 10:19:56.012031 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 10:19:56.016025 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:19:56.063592 kernel: SCSI subsystem initialized
Sep 13 10:19:56.073567 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 10:19:56.085572 kernel: iscsi: registered transport (tcp)
Sep 13 10:19:56.110549 kernel: iscsi: registered transport (qla4xxx)
Sep 13 10:19:56.110579 kernel: QLogic iSCSI HBA Driver
Sep 13 10:19:56.133318 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 10:19:56.154965 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:19:56.158461 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 10:19:56.213324 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 10:19:56.215749 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 10:19:56.273546 kernel: raid6: avx2x4 gen() 30471 MB/s
Sep 13 10:19:56.290542 kernel: raid6: avx2x2 gen() 30399 MB/s
Sep 13 10:19:56.307562 kernel: raid6: avx2x1 gen() 25041 MB/s
Sep 13 10:19:56.307582 kernel: raid6: using algorithm avx2x4 gen() 30471 MB/s
Sep 13 10:19:56.325563 kernel: raid6: .... xor() 8307 MB/s, rmw enabled
Sep 13 10:19:56.325596 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 10:19:56.345540 kernel: xor: automatically using best checksumming function avx
Sep 13 10:19:56.509563 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 10:19:56.517777 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 10:19:56.520850 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:19:56.548448 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 13 10:19:56.554264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:19:56.555874 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 10:19:56.587648 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Sep 13 10:19:56.615652 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 10:19:56.617559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 10:19:56.706056 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:19:56.710129 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 10:19:56.757561 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 13 10:19:56.761551 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 10:19:56.768953 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 10:19:56.768976 kernel: GPT:9289727 != 19775487
Sep 13 10:19:56.768991 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 10:19:56.769004 kernel: GPT:9289727 != 19775487
Sep 13 10:19:56.769017 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 10:19:56.769038 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:19:56.773560 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 13 10:19:56.776535 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 10:19:56.776564 kernel: libata version 3.00 loaded.
Sep 13 10:19:56.786538 kernel: AES CTR mode by8 optimization enabled
Sep 13 10:19:56.788811 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 10:19:56.789137 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 10:19:56.790235 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 13 10:19:56.790446 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 13 10:19:56.791545 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 10:19:56.794560 kernel: scsi host0: ahci
Sep 13 10:19:56.795553 kernel: scsi host1: ahci
Sep 13 10:19:56.799846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 10:19:56.802668 kernel: scsi host2: ahci
Sep 13 10:19:56.800064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:19:56.824983 kernel: scsi host3: ahci
Sep 13 10:19:56.825276 kernel: scsi host4: ahci
Sep 13 10:19:56.825479 kernel: scsi host5: ahci
Sep 13 10:19:56.825700 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Sep 13 10:19:56.825718 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Sep 13 10:19:56.825741 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Sep 13 10:19:56.825756 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Sep 13 10:19:56.825770 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Sep 13 10:19:56.825785 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Sep 13 10:19:56.826360 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:19:56.830632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:19:56.834325 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:19:56.851217 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 10:19:56.871336 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 10:19:56.887753 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 10:19:56.890295 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 10:19:56.909095 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 10:19:56.912789 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 10:19:56.914958 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 10:19:56.915917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:19:56.918103 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:19:56.937155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:19:56.939753 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:19:56.946098 disk-uuid[636]: Primary Header is updated.
Sep 13 10:19:56.946098 disk-uuid[636]: Secondary Entries is updated.
Sep 13 10:19:56.946098 disk-uuid[636]: Secondary Header is updated.
Sep 13 10:19:56.950552 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:19:56.955539 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:19:56.967995 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:19:57.118679 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 10:19:57.118732 kernel: ata3.00: LPM support broken, forcing max_power
Sep 13 10:19:57.118744 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 10:19:57.118754 kernel: ata3.00: applying bridge limits
Sep 13 10:19:57.118765 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 13 10:19:57.120540 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 10:19:57.120558 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 10:19:57.121560 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 10:19:57.122565 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 10:19:57.122585 kernel: ata3.00: LPM support broken, forcing max_power
Sep 13 10:19:57.123856 kernel: ata3.00: configured for UDMA/100
Sep 13 10:19:57.124548 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 10:19:57.182069 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 10:19:57.182297 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 10:19:57.194865 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 10:19:57.508490 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 10:19:57.511123 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 10:19:57.513552 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:19:57.513967 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 10:19:57.515159 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 10:19:57.538761 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 10:19:57.955551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:19:57.955998 disk-uuid[637]: The operation has completed successfully.
Sep 13 10:19:57.990636 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 10:19:57.990799 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 10:19:58.024124 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 10:19:58.049975 sh[670]: Success
Sep 13 10:19:58.067890 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 10:19:58.067926 kernel: device-mapper: uevent: version 1.0.3
Sep 13 10:19:58.068975 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 13 10:19:58.078539 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 13 10:19:58.105989 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 10:19:58.109712 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 10:19:58.157191 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 10:19:58.163591 kernel: BTRFS: device fsid fbf3e737-db97-4ff7-a1f5-c4d4b7390663 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (682)
Sep 13 10:19:58.165612 kernel: BTRFS info (device dm-0): first mount of filesystem fbf3e737-db97-4ff7-a1f5-c4d4b7390663
Sep 13 10:19:58.165635 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:19:58.170154 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 10:19:58.170184 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 13 10:19:58.171484 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 10:19:58.173606 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 10:19:58.175906 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 10:19:58.177960 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 10:19:58.180428 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 10:19:58.217542 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (715)
Sep 13 10:19:58.219552 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:19:58.219612 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:19:58.222551 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 10:19:58.222575 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 10:19:58.228827 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:19:58.229924 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 10:19:58.233216 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 10:19:58.423632 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 10:19:58.429810 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 10:19:58.458224 ignition[760]: Ignition 2.22.0
Sep 13 10:19:58.458238 ignition[760]: Stage: fetch-offline
Sep 13 10:19:58.458298 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Sep 13 10:19:58.458309 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:19:58.458433 ignition[760]: parsed url from cmdline: ""
Sep 13 10:19:58.458437 ignition[760]: no config URL provided
Sep 13 10:19:58.458442 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 10:19:58.458452 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Sep 13 10:19:58.458476 ignition[760]: op(1): [started] loading QEMU firmware config module
Sep 13 10:19:58.458482 ignition[760]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 10:19:58.468855 ignition[760]: op(1): [finished] loading QEMU firmware config module
Sep 13 10:19:58.533685 systemd-networkd[859]: lo: Link UP
Sep 13 10:19:58.533697 systemd-networkd[859]: lo: Gained carrier
Sep 13 10:19:58.535336 systemd-networkd[859]: Enumeration completed
Sep 13 10:19:58.535631 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 10:19:58.536695 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:19:58.536701 systemd-networkd[859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 10:19:58.537199 systemd-networkd[859]: eth0: Link UP
Sep 13 10:19:58.538445 systemd-networkd[859]: eth0: Gained carrier
Sep 13 10:19:58.538455 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:19:58.538477 systemd[1]: Reached target network.target - Network.
Sep 13 10:19:58.559431 ignition[760]: parsing config with SHA512: 7c543107aaedb1649305c1659d8c665d95c940ba08d04c5e7768d50afb239ba885c6f9819aaa6d40257e6204c2d80b67a68ee5a15016b87c41a09e8e914a6304
Sep 13 10:19:58.560596 systemd-networkd[859]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 10:19:58.565056 unknown[760]: fetched base config from "system"
Sep 13 10:19:58.565069 unknown[760]: fetched user config from "qemu"
Sep 13 10:19:58.565670 ignition[760]: fetch-offline: fetch-offline passed
Sep 13 10:19:58.565777 ignition[760]: Ignition finished successfully
Sep 13 10:19:58.569067 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 10:19:58.571434 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 10:19:58.573443 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 10:19:58.622794 ignition[866]: Ignition 2.22.0
Sep 13 10:19:58.622807 ignition[866]: Stage: kargs
Sep 13 10:19:58.622964 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Sep 13 10:19:58.622976 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:19:58.623869 ignition[866]: kargs: kargs passed
Sep 13 10:19:58.623918 ignition[866]: Ignition finished successfully
Sep 13 10:19:58.632807 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 10:19:58.636222 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 10:19:58.708732 ignition[873]: Ignition 2.22.0
Sep 13 10:19:58.708746 ignition[873]: Stage: disks
Sep 13 10:19:58.708909 ignition[873]: no configs at "/usr/lib/ignition/base.d"
Sep 13 10:19:58.708931 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:19:58.709700 ignition[873]: disks: disks passed
Sep 13 10:19:58.709746 ignition[873]: Ignition finished successfully
Sep 13 10:19:58.717184 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 10:19:58.717709 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 10:19:58.718039 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 10:19:58.718399 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 10:19:58.718912 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 10:19:58.719236 systemd[1]: Reached target basic.target - Basic System.
Sep 13 10:19:58.721382 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 10:19:58.772298 systemd-fsck[883]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 13 10:19:58.783924 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 10:19:58.788775 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 10:19:58.913565 kernel: EXT4-fs (vda9): mounted filesystem 1fad58d4-1271-484a-aa8e-8f7f5dca764c r/w with ordered data mode. Quota mode: none.
Sep 13 10:19:58.914168 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 10:19:58.915151 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 10:19:58.917512 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 10:19:58.919937 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 10:19:58.921932 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 10:19:58.921994 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 10:19:58.923654 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 10:19:58.938879 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 10:19:58.942547 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 10:19:58.947208 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891)
Sep 13 10:19:58.947233 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:19:58.947252 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:19:58.950087 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 10:19:58.950129 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 10:19:58.952673 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 10:19:58.990754 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 10:19:58.996251 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory
Sep 13 10:19:59.001455 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 10:19:59.007006 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 10:19:59.102383 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 10:19:59.105497 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 10:19:59.108058 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 10:19:59.130547 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:19:59.142679 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 10:19:59.161590 ignition[1005]: INFO : Ignition 2.22.0
Sep 13 10:19:59.161590 ignition[1005]: INFO : Stage: mount
Sep 13 10:19:59.163394 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:19:59.163394 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:19:59.163394 ignition[1005]: INFO : mount: mount passed
Sep 13 10:19:59.163394 ignition[1005]: INFO : Ignition finished successfully
Sep 13 10:19:59.165021 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 10:19:59.170746 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 10:19:59.173886 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 10:19:59.207162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 10:19:59.234354 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017)
Sep 13 10:19:59.234392 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:19:59.234404 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:19:59.238730 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 10:19:59.238808 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 10:19:59.240633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 10:19:59.272701 ignition[1034]: INFO : Ignition 2.22.0
Sep 13 10:19:59.272701 ignition[1034]: INFO : Stage: files
Sep 13 10:19:59.274627 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:19:59.274627 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:19:59.274627 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 10:19:59.278484 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 10:19:59.278484 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 10:19:59.283030 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 10:19:59.284590 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 10:19:59.286409 unknown[1034]: wrote ssh authorized keys file for user: core
Sep 13 10:19:59.287649 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 10:19:59.290194 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 10:19:59.292280 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 13 10:19:59.328889 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 10:19:59.424432 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 10:19:59.424432 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 10:19:59.428274 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 10:19:59.643538 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 10:19:59.699776 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 10:19:59.699776 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 10:19:59.703179 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 10:19:59.703179 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 10:19:59.703179 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 10:19:59.708053 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 10:19:59.709806 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 10:19:59.711422 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 10:19:59.713085 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 10:19:59.718439 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 10:19:59.720350 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 10:19:59.722423 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 10:19:59.725379 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 10:19:59.725379 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 10:19:59.725379 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 13 10:19:59.742691 systemd-networkd[859]: eth0: Gained IPv6LL
Sep 13 10:20:00.076305 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 10:20:00.680178 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 10:20:00.680178 ignition[1034]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 10:20:00.683934 ignition[1034]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 10:20:00.689802 ignition[1034]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 10:20:00.689802 ignition[1034]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 10:20:00.689802 ignition[1034]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 10:20:00.694006 ignition[1034]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 10:20:00.694006 ignition[1034]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 10:20:00.694006 ignition[1034]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 10:20:00.694006 ignition[1034]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 10:20:00.716620 ignition[1034]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 10:20:00.722924 ignition[1034]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 10:20:00.724662 ignition[1034]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 10:20:00.724662 ignition[1034]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 10:20:00.724662 ignition[1034]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 10:20:00.724662 ignition[1034]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 10:20:00.724662 ignition[1034]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 10:20:00.724662 ignition[1034]: INFO : files: files passed
Sep 13 10:20:00.724662 ignition[1034]: INFO : Ignition finished successfully
Sep 13 10:20:00.727249 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 10:20:00.734931 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 10:20:00.744612 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 10:20:00.748272 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 10:20:00.748410 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 10:20:00.770252 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 10:20:00.774911 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:20:00.776648 initrd-setup-root-after-ignition[1065]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:20:00.778230 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:20:00.779221 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:20:00.780358 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 10:20:00.783265 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 10:20:00.831269 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 10:20:00.831418 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 10:20:00.832432 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 10:20:00.834858 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 10:20:00.837065 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 10:20:00.840251 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 10:20:00.871186 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 10:20:00.872965 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 10:20:00.903356 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:20:00.903930 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:20:00.904288 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 10:20:00.904820 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 10:20:00.904938 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 10:20:00.911349 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 10:20:00.913357 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 10:20:00.915132 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 10:20:00.915847 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 10:20:00.916184 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 10:20:00.916540 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 10:20:00.922687 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 10:20:00.923236 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 10:20:00.923596 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 10:20:00.924085 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 10:20:00.924414 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 10:20:00.924891 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 10:20:00.925008 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 10:20:00.933261 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:20:00.933621 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:20:00.934063 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 10:20:00.939009 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:20:00.939597 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 10:20:00.939705 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 10:20:00.940421 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 10:20:00.940544 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 10:20:00.940999 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 10:20:00.941248 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 10:20:00.951597 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:20:00.952106 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 10:20:00.952434 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 10:20:00.952949 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 10:20:00.953043 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 10:20:00.957846 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 10:20:00.957933 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 10:20:00.959543 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 10:20:00.959659 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:20:00.960026 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 10:20:00.960132 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 10:20:00.968844 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 10:20:00.994641 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 10:20:00.995922 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 10:20:00.996264 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:20:00.998982 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 10:20:00.999125 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 10:20:01.005557 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 10:20:01.005734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 10:20:01.014509 ignition[1089]: INFO : Ignition 2.22.0
Sep 13 10:20:01.014509 ignition[1089]: INFO : Stage: umount
Sep 13 10:20:01.016637 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:20:01.016637 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:20:01.016637 ignition[1089]: INFO : umount: umount passed
Sep 13 10:20:01.016637 ignition[1089]: INFO : Ignition finished successfully
Sep 13 10:20:01.018184 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 10:20:01.018337 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 10:20:01.020173 systemd[1]: Stopped target network.target - Network.
Sep 13 10:20:01.021881 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 10:20:01.021964 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 10:20:01.023604 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 10:20:01.023666 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 10:20:01.024036 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 10:20:01.024095 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 10:20:01.024365 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 10:20:01.024421 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 10:20:01.025026 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 10:20:01.030740 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 10:20:01.032367 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 10:20:01.039848 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 10:20:01.039978 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 10:20:01.043224 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 10:20:01.043466 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 10:20:01.043602 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 10:20:01.049846 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 10:20:01.051722 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 10:20:01.053805 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 10:20:01.053867 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:20:01.056845 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 10:20:01.058606 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 10:20:01.058689 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 10:20:01.059284 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 10:20:01.059340 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:20:01.064699 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 10:20:01.064792 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:20:01.065175 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 10:20:01.065240 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:20:01.071019 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:20:01.073182 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 10:20:01.073256 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:20:01.089333 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 10:20:01.092699 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:20:01.093428 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 10:20:01.093473 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:20:01.095266 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 10:20:01.095307 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:20:01.097291 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 10:20:01.097341 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 10:20:01.101255 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 10:20:01.101373 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 10:20:01.103536 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 10:20:01.103613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 10:20:01.105346 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 10:20:01.109968 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 10:20:01.110032 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:20:01.113060 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 10:20:01.113108 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:20:01.116391 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 10:20:01.116438 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 10:20:01.119662 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 10:20:01.119710 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:20:01.120262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 10:20:01.120305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:20:01.126230 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 13 10:20:01.126290 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 13 10:20:01.126334 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 13 10:20:01.126383 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:20:01.126786 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 10:20:01.131674 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 10:20:01.138918 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 10:20:01.139041 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 10:20:01.279181 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 10:20:01.279325 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 10:20:01.280219 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 10:20:01.282252 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 10:20:01.282309 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 10:20:01.286693 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 10:20:01.320461 systemd[1]: Switching root.
Sep 13 10:20:01.360233 systemd-journald[220]: Journal stopped
Sep 13 10:20:02.839207 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 13 10:20:02.839290 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 10:20:02.839305 kernel: SELinux: policy capability open_perms=1
Sep 13 10:20:02.839326 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 10:20:02.839338 kernel: SELinux: policy capability always_check_network=0
Sep 13 10:20:02.839349 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 10:20:02.839365 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 10:20:02.839376 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 10:20:02.839388 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 10:20:02.839399 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 10:20:02.839410 kernel: audit: type=1403 audit(1757758801.934:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 10:20:02.839437 systemd[1]: Successfully loaded SELinux policy in 67.230ms.
Sep 13 10:20:02.839458 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.507ms.
Sep 13 10:20:02.839476 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 10:20:02.839489 systemd[1]: Detected virtualization kvm.
Sep 13 10:20:02.839501 systemd[1]: Detected architecture x86-64.
Sep 13 10:20:02.839529 systemd[1]: Detected first boot.
Sep 13 10:20:02.839542 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 10:20:02.839555 zram_generator::config[1135]: No configuration found.
Sep 13 10:20:02.839568 kernel: Guest personality initialized and is inactive
Sep 13 10:20:02.839586 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 10:20:02.839606 kernel: Initialized host personality
Sep 13 10:20:02.839618 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 10:20:02.839630 systemd[1]: Populated /etc with preset unit settings.
Sep 13 10:20:02.839643 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 10:20:02.839655 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 10:20:02.839668 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 10:20:02.839679 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 10:20:02.839693 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 10:20:02.839710 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 10:20:02.839731 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 10:20:02.839743 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 10:20:02.839756 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 10:20:02.839769 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 10:20:02.839781 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 10:20:02.839794 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 10:20:02.839806 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:20:02.839824 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:20:02.839836 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 10:20:02.839848 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 10:20:02.839873 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 10:20:02.839890 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 10:20:02.839912 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 10:20:02.839924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:20:02.839936 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:20:02.839954 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 10:20:02.839969 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 10:20:02.839986 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 10:20:02.839999 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 10:20:02.840012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:20:02.840024 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 10:20:02.840037 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 10:20:02.840049 systemd[1]: Reached target swap.target - Swaps.
Sep 13 10:20:02.840061 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 10:20:02.840079 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 10:20:02.840092 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 10:20:02.840104 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:20:02.840116 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:20:02.840129 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:20:02.840143 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 10:20:02.840158 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 10:20:02.840170 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 10:20:02.840182 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 10:20:02.840232 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:20:02.840246 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 10:20:02.840259 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 10:20:02.840274 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 10:20:02.840297 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 10:20:02.840310 systemd[1]: Reached target machines.target - Containers.
Sep 13 10:20:02.840330 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 10:20:02.840344 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:20:02.840362 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 10:20:02.840381 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 10:20:02.840406 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:20:02.840430 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 10:20:02.840443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:20:02.840455 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 10:20:02.840468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:20:02.840480 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 10:20:02.840492 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 10:20:02.840509 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 10:20:02.840600 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 10:20:02.840628 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 10:20:02.840651 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:20:02.840676 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 10:20:02.840698 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 10:20:02.840729 kernel: loop: module loaded
Sep 13 10:20:02.840761 kernel: fuse: init (API version 7.41)
Sep 13 10:20:02.840795 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 10:20:02.840820 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 10:20:02.840833 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 10:20:02.840846 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 10:20:02.840873 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 10:20:02.840887 systemd[1]: Stopped verity-setup.service.
Sep 13 10:20:02.840907 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:20:02.840921 kernel: ACPI: bus type drm_connector registered Sep 13 10:20:02.840932 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 10:20:02.840944 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 10:20:02.840956 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 10:20:02.840987 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 10:20:02.841041 systemd-journald[1206]: Collecting audit messages is disabled. Sep 13 10:20:02.841076 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 10:20:02.841090 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 10:20:02.841102 systemd-journald[1206]: Journal started Sep 13 10:20:02.841125 systemd-journald[1206]: Runtime Journal (/run/log/journal/6fc6f5bd7117407db1572d9d59f6a1bf) is 6M, max 48.4M, 42.4M free. Sep 13 10:20:02.545043 systemd[1]: Queued start job for default target multi-user.target. Sep 13 10:20:02.571264 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 10:20:02.572458 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 10:20:02.842728 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 10:20:02.844496 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 10:20:02.846128 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 10:20:02.847762 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 10:20:02.848005 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 10:20:02.849503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 10:20:02.849758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 10:20:02.851202 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 13 10:20:02.851446 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 10:20:02.852879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 10:20:02.853126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 10:20:02.854668 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 10:20:02.854904 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 10:20:02.856480 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 10:20:02.856923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 10:20:02.858567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 10:20:02.860218 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 10:20:02.862202 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 10:20:02.863997 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 13 10:20:02.879820 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 10:20:02.882502 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 10:20:02.884959 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 10:20:02.886226 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 10:20:02.886257 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 10:20:02.888571 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 13 10:20:02.897808 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 13 10:20:02.899152 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 10:20:02.900706 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 10:20:02.903630 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 10:20:02.904969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 10:20:02.908322 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 10:20:02.909567 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 10:20:02.910800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 10:20:02.917982 systemd-journald[1206]: Time spent on flushing to /var/log/journal/6fc6f5bd7117407db1572d9d59f6a1bf is 14.205ms for 1078 entries. Sep 13 10:20:02.917982 systemd-journald[1206]: System Journal (/var/log/journal/6fc6f5bd7117407db1572d9d59f6a1bf) is 8M, max 195.6M, 187.6M free. Sep 13 10:20:03.019743 systemd-journald[1206]: Received client request to flush runtime journal. Sep 13 10:20:02.917644 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 10:20:03.010886 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 10:20:03.016114 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 10:20:03.021334 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 10:20:03.024730 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 10:20:03.026652 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Sep 13 10:20:03.030782 kernel: loop0: detected capacity change from 0 to 229808 Sep 13 10:20:03.034634 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 10:20:03.036797 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 10:20:03.040783 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 13 10:20:03.045622 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Sep 13 10:20:03.045637 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Sep 13 10:20:03.047494 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 10:20:03.049191 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 10:20:03.058321 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 10:20:03.061502 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 10:20:03.069589 kernel: loop1: detected capacity change from 0 to 110984 Sep 13 10:20:03.083495 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 13 10:20:03.107708 kernel: loop2: detected capacity change from 0 to 128016 Sep 13 10:20:03.108205 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 10:20:03.113677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 10:20:03.268967 kernel: loop3: detected capacity change from 0 to 229808 Sep 13 10:20:03.269318 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Sep 13 10:20:03.269340 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Sep 13 10:20:03.275409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 13 10:20:03.286540 kernel: loop4: detected capacity change from 0 to 110984 Sep 13 10:20:03.295569 kernel: loop5: detected capacity change from 0 to 128016 Sep 13 10:20:03.301689 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 13 10:20:03.302282 (sd-merge)[1279]: Merged extensions into '/usr'. Sep 13 10:20:03.308197 systemd[1]: Reload requested from client PID 1254 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 10:20:03.308221 systemd[1]: Reloading... Sep 13 10:20:03.467548 zram_generator::config[1310]: No configuration found. Sep 13 10:20:03.727287 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 10:20:03.727687 systemd[1]: Reloading finished in 418 ms. Sep 13 10:20:03.768491 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 10:20:03.773174 ldconfig[1249]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 10:20:03.783866 systemd[1]: Starting ensure-sysext.service... Sep 13 10:20:03.785801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 10:20:03.808765 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 10:20:03.829456 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)... Sep 13 10:20:03.829471 systemd[1]: Reloading... Sep 13 10:20:03.836845 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 13 10:20:03.837244 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 13 10:20:03.837743 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 13 10:20:03.838634 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 10:20:03.839715 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 10:20:03.839996 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Sep 13 10:20:03.840072 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Sep 13 10:20:03.844607 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 10:20:03.844620 systemd-tmpfiles[1343]: Skipping /boot Sep 13 10:20:03.855186 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 10:20:03.855203 systemd-tmpfiles[1343]: Skipping /boot Sep 13 10:20:03.892549 zram_generator::config[1371]: No configuration found. Sep 13 10:20:04.108757 systemd[1]: Reloading finished in 278 ms. Sep 13 10:20:04.121769 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 10:20:04.123454 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 10:20:04.151632 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 10:20:04.154203 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 10:20:04.156544 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 10:20:04.168281 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 10:20:04.172488 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 10:20:04.177813 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 10:20:04.181508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 13 10:20:04.181713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 10:20:04.187823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 10:20:04.190144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 10:20:04.194091 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 10:20:04.195284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 10:20:04.195394 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 13 10:20:04.195481 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 10:20:04.196670 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 10:20:04.198508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 10:20:04.198811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 10:20:04.202482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 10:20:04.203355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 10:20:04.215397 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 10:20:04.217298 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 10:20:04.217782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 10:20:04.221156 systemd-udevd[1415]: Using default interface naming scheme 'v255'. 
Sep 13 10:20:04.225440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 10:20:04.226870 augenrules[1443]: No rules Sep 13 10:20:04.225782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 10:20:04.227341 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 10:20:04.229948 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 10:20:04.232995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 10:20:04.244494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 10:20:04.245885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 10:20:04.246086 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 13 10:20:04.248904 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 10:20:04.252417 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 10:20:04.254568 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 10:20:04.256003 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 10:20:04.257976 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 10:20:04.258242 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 10:20:04.331121 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Sep 13 10:20:04.333395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 10:20:04.333664 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 10:20:04.335406 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 10:20:04.335644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 10:20:04.337178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 10:20:04.337399 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 10:20:04.339561 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 10:20:04.339795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 10:20:04.349406 systemd[1]: Finished ensure-sysext.service. Sep 13 10:20:04.355713 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 10:20:04.358461 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 10:20:04.360570 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 10:20:04.360636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 10:20:04.365667 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 10:20:04.366876 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 10:20:04.507594 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 10:20:04.541278 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 13 10:20:04.615082 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 10:20:04.618538 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 10:20:04.637557 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 10:20:04.641595 kernel: ACPI: button: Power Button [PWRF] Sep 13 10:20:04.641874 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 10:20:04.651038 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 10:20:04.774721 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 13 10:20:04.775069 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 10:20:04.776736 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 10:20:04.801989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 10:20:04.844156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 10:20:04.844451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:20:04.848498 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 13 10:20:04.939980 kernel: kvm_amd: TSC scaling supported Sep 13 10:20:04.940055 kernel: kvm_amd: Nested Virtualization enabled Sep 13 10:20:04.940068 kernel: kvm_amd: Nested Paging enabled Sep 13 10:20:04.941039 kernel: kvm_amd: LBR virtualization supported Sep 13 10:20:04.941064 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 13 10:20:04.941569 kernel: kvm_amd: Virtual GIF supported Sep 13 10:20:04.972281 systemd-networkd[1490]: lo: Link UP Sep 13 10:20:04.972293 systemd-networkd[1490]: lo: Gained carrier Sep 13 10:20:04.974449 systemd-networkd[1490]: Enumeration completed Sep 13 10:20:04.974940 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 10:20:04.974945 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 10:20:04.975060 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 10:20:04.975610 systemd-networkd[1490]: eth0: Link UP Sep 13 10:20:04.976052 systemd-networkd[1490]: eth0: Gained carrier Sep 13 10:20:04.976072 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 10:20:04.976617 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 10:20:04.978348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 10:20:04.979862 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 10:20:04.982814 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 13 10:20:04.985579 systemd-networkd[1490]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 10:20:04.985744 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Sep 13 10:20:04.986290 systemd-timesyncd[1491]: Network configuration changed, trying to establish connection. Sep 13 10:20:06.113291 systemd-timesyncd[1491]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 10:20:06.113330 systemd-timesyncd[1491]: Initial clock synchronization to Sat 2025-09-13 10:20:06.113210 UTC. Sep 13 10:20:06.127519 kernel: EDAC MC: Ver: 3.0.0 Sep 13 10:20:06.131258 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 13 10:20:06.141829 systemd-resolved[1413]: Positive Trust Anchors: Sep 13 10:20:06.142164 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 10:20:06.142245 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 10:20:06.146408 systemd-resolved[1413]: Defaulting to hostname 'linux'. Sep 13 10:20:06.148173 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 10:20:06.149366 systemd[1]: Reached target network.target - Network. Sep 13 10:20:06.150283 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 10:20:06.151441 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 10:20:06.152624 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Sep 13 10:20:06.153855 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 10:20:06.155093 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 13 10:20:06.156392 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 10:20:06.157630 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 10:20:06.158864 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 10:20:06.160067 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 10:20:06.160099 systemd[1]: Reached target paths.target - Path Units. Sep 13 10:20:06.160975 systemd[1]: Reached target timers.target - Timer Units. Sep 13 10:20:06.163124 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 10:20:06.165998 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 10:20:06.169098 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 13 10:20:06.170467 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 13 10:20:06.171729 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 13 10:20:06.184073 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 10:20:06.185530 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 13 10:20:06.187399 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 10:20:06.189185 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 10:20:06.190156 systemd[1]: Reached target basic.target - Basic System. 
Sep 13 10:20:06.191117 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 10:20:06.191149 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 10:20:06.192193 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 10:20:06.194306 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 10:20:06.196223 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 10:20:06.198366 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 10:20:06.201484 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 10:20:06.201870 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 10:20:06.203861 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 13 10:20:06.207615 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 10:20:06.209622 jq[1546]: false Sep 13 10:20:06.209715 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 10:20:06.212749 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 10:20:06.215066 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 13 10:20:06.215413 oslogin_cache_refresh[1548]: Refreshing passwd entry cache Sep 13 10:20:06.216862 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing passwd entry cache Sep 13 10:20:06.220352 extend-filesystems[1547]: Found /dev/vda6 Sep 13 10:20:06.223757 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting users, quitting Sep 13 10:20:06.223757 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 13 10:20:06.223750 oslogin_cache_refresh[1548]: Failure getting users, quitting Sep 13 10:20:06.223990 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing group entry cache Sep 13 10:20:06.223766 oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 13 10:20:06.223813 oslogin_cache_refresh[1548]: Refreshing group entry cache Sep 13 10:20:06.224792 extend-filesystems[1547]: Found /dev/vda9 Sep 13 10:20:06.227155 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 10:20:06.227908 extend-filesystems[1547]: Checking size of /dev/vda9 Sep 13 10:20:06.229988 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 10:20:06.230061 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting groups, quitting Sep 13 10:20:06.230061 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 13 10:20:06.230016 oslogin_cache_refresh[1548]: Failure getting groups, quitting Sep 13 10:20:06.230027 oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 13 10:20:06.230654 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 13 10:20:06.231670 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 10:20:06.234678 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 10:20:06.237999 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 10:20:06.239992 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 10:20:06.240230 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 10:20:06.240751 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 13 10:20:06.240992 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 13 10:20:06.242928 extend-filesystems[1547]: Resized partition /dev/vda9 Sep 13 10:20:06.242670 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 10:20:06.247866 jq[1568]: true Sep 13 10:20:06.248185 extend-filesystems[1573]: resize2fs 1.47.3 (8-Jul-2025) Sep 13 10:20:06.243224 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 10:20:06.249712 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 10:20:06.249959 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 13 10:20:06.254982 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 10:20:06.265767 update_engine[1566]: I20250913 10:20:06.265664 1566 main.cc:92] Flatcar Update Engine starting Sep 13 10:20:06.274934 (ntainerd)[1578]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 10:20:06.285824 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 10:20:06.286058 jq[1576]: true Sep 13 10:20:06.292356 tar[1574]: linux-amd64/LICENSE Sep 13 10:20:06.312208 tar[1574]: linux-amd64/helm Sep 13 10:20:06.313206 extend-filesystems[1573]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 10:20:06.313206 extend-filesystems[1573]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 10:20:06.313206 extend-filesystems[1573]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 10:20:06.316602 extend-filesystems[1547]: Resized filesystem in /dev/vda9 Sep 13 10:20:06.318445 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 10:20:06.318759 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 10:20:06.325725 dbus-daemon[1544]: [system] SELinux support is enabled Sep 13 10:20:06.326588 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 10:20:06.333895 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 10:20:06.333929 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 10:20:06.336675 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 13 10:20:06.336712 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 10:20:06.340964 update_engine[1566]: I20250913 10:20:06.339815 1566 update_check_scheduler.cc:74] Next update check in 4m36s Sep 13 10:20:06.340262 systemd[1]: Started update-engine.service - Update Engine. Sep 13 10:20:06.345906 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 10:20:06.383205 systemd-logind[1558]: Watching system buttons on /dev/input/event2 (Power Button) Sep 13 10:20:06.383237 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 10:20:06.383608 systemd-logind[1558]: New seat seat0. Sep 13 10:20:06.385144 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 10:20:06.401381 bash[1610]: Updated "/home/core/.ssh/authorized_keys" Sep 13 10:20:06.402854 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 10:20:06.405221 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 10:20:06.465272 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 10:20:06.625927 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 10:20:06.731822 containerd[1578]: time="2025-09-13T10:20:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 13 10:20:06.732141 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 10:20:06.734525 containerd[1578]: time="2025-09-13T10:20:06.734470939Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 13 10:20:06.740990 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Sep 13 10:20:06.744453 containerd[1578]: time="2025-09-13T10:20:06.744410168Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.728µs" Sep 13 10:20:06.744550 containerd[1578]: time="2025-09-13T10:20:06.744534782Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 13 10:20:06.744609 containerd[1578]: time="2025-09-13T10:20:06.744597309Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 13 10:20:06.744889 containerd[1578]: time="2025-09-13T10:20:06.744871063Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 13 10:20:06.744951 containerd[1578]: time="2025-09-13T10:20:06.744938379Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 13 10:20:06.745027 containerd[1578]: time="2025-09-13T10:20:06.745013750Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 10:20:06.745151 containerd[1578]: time="2025-09-13T10:20:06.745132543Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 10:20:06.745204 containerd[1578]: time="2025-09-13T10:20:06.745192385Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 10:20:06.745589 containerd[1578]: time="2025-09-13T10:20:06.745568320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 10:20:06.745673 containerd[1578]: time="2025-09-13T10:20:06.745657758Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 10:20:06.745743 containerd[1578]: time="2025-09-13T10:20:06.745726848Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 10:20:06.745815 containerd[1578]: time="2025-09-13T10:20:06.745791719Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 13 10:20:06.745923 containerd[1578]: time="2025-09-13T10:20:06.745906124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 13 10:20:06.746186 containerd[1578]: time="2025-09-13T10:20:06.746157105Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 10:20:06.746209 containerd[1578]: time="2025-09-13T10:20:06.746195016Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 10:20:06.746209 containerd[1578]: time="2025-09-13T10:20:06.746204343Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 13 10:20:06.746262 containerd[1578]: time="2025-09-13T10:20:06.746253956Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 13 10:20:06.746525 containerd[1578]: time="2025-09-13T10:20:06.746486021Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 13 10:20:06.746585 containerd[1578]: time="2025-09-13T10:20:06.746571472Z" level=info msg="metadata content store policy set" policy=shared Sep 13 10:20:06.752048 containerd[1578]: time="2025-09-13T10:20:06.752000854Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Sep 13 10:20:06.752048 containerd[1578]: time="2025-09-13T10:20:06.752063942Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 13 10:20:06.752048 containerd[1578]: time="2025-09-13T10:20:06.752084140Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752097204Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752111551Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752127521Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752140926Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752154091Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752166474Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752177124Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752186652Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752202161Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752417585Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 13 10:20:06.752439 containerd[1578]: time="2025-09-13T10:20:06.752445467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752461247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752472959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752484390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752512854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752525267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752539364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752549973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752560904Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752577275Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 13 10:20:06.752767 containerd[1578]: 
time="2025-09-13T10:20:06.752669658Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752689335Z" level=info msg="Start snapshots syncer" Sep 13 10:20:06.752767 containerd[1578]: time="2025-09-13T10:20:06.752724781Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 13 10:20:06.753314 containerd[1578]: time="2025-09-13T10:20:06.753006129Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 13 10:20:06.753314 containerd[1578]: time="2025-09-13T10:20:06.753066552Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753167942Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753280564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753299549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753309538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753321350Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753336438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753346938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753360453Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753383366Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753408534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753421087Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753454910Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753468135Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 10:20:06.753631 containerd[1578]: time="2025-09-13T10:20:06.753475579Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 10:20:06.753959 containerd[1578]: time="2025-09-13T10:20:06.753484376Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 10:20:06.753959 containerd[1578]: time="2025-09-13T10:20:06.753491990Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 13 10:20:06.753959 containerd[1578]: time="2025-09-13T10:20:06.753519802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 13 10:20:06.753959 containerd[1578]: time="2025-09-13T10:20:06.753530392Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 13 10:20:06.753959 containerd[1578]: time="2025-09-13T10:20:06.753557232Z" level=info msg="runtime interface created" Sep 13 10:20:06.753959 containerd[1578]: 
time="2025-09-13T10:20:06.753563775Z" level=info msg="created NRI interface" Sep 13 10:20:06.753959 containerd[1578]: time="2025-09-13T10:20:06.753571720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 13 10:20:06.753959 containerd[1578]: time="2025-09-13T10:20:06.753581879Z" level=info msg="Connect containerd service" Sep 13 10:20:06.753959 containerd[1578]: time="2025-09-13T10:20:06.753603619Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 10:20:06.755210 containerd[1578]: time="2025-09-13T10:20:06.755125534Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 10:20:06.761551 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 10:20:06.761850 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 10:20:06.765887 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 10:20:06.793297 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 10:20:06.797291 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 10:20:06.805321 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 10:20:06.806708 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 10:20:06.905336 tar[1574]: linux-amd64/README.md Sep 13 10:20:06.928146 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 10:20:06.928410 containerd[1578]: time="2025-09-13T10:20:06.928282214Z" level=info msg="Start subscribing containerd event" Sep 13 10:20:06.928410 containerd[1578]: time="2025-09-13T10:20:06.928378204Z" level=info msg="Start recovering state" Sep 13 10:20:06.928410 containerd[1578]: time="2025-09-13T10:20:06.928391980Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 13 10:20:06.928488 containerd[1578]: time="2025-09-13T10:20:06.928467391Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 10:20:06.928584 containerd[1578]: time="2025-09-13T10:20:06.928565425Z" level=info msg="Start event monitor" Sep 13 10:20:06.928608 containerd[1578]: time="2025-09-13T10:20:06.928598357Z" level=info msg="Start cni network conf syncer for default" Sep 13 10:20:06.928638 containerd[1578]: time="2025-09-13T10:20:06.928612323Z" level=info msg="Start streaming server" Sep 13 10:20:06.928672 containerd[1578]: time="2025-09-13T10:20:06.928647830Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 10:20:06.928672 containerd[1578]: time="2025-09-13T10:20:06.928659221Z" level=info msg="runtime interface starting up..." Sep 13 10:20:06.928672 containerd[1578]: time="2025-09-13T10:20:06.928667547Z" level=info msg="starting plugins..." Sep 13 10:20:06.928728 containerd[1578]: time="2025-09-13T10:20:06.928699917Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 10:20:06.928890 containerd[1578]: time="2025-09-13T10:20:06.928863805Z" level=info msg="containerd successfully booted in 0.197713s" Sep 13 10:20:06.929686 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 10:20:07.844791 systemd-networkd[1490]: eth0: Gained IPv6LL Sep 13 10:20:07.848512 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 10:20:07.850990 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 10:20:07.854417 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 10:20:07.857180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:20:07.859377 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 10:20:07.889621 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 13 10:20:07.891235 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 10:20:07.891548 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 10:20:07.894004 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 10:20:09.212329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:20:09.214251 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 10:20:09.216671 systemd[1]: Startup finished in 4.074s (kernel) + 6.286s (initrd) + 6.221s (userspace) = 16.582s. Sep 13 10:20:09.224935 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:20:09.855317 kubelet[1679]: E0913 10:20:09.855236 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:20:09.860384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:20:09.860621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:20:09.861050 systemd[1]: kubelet.service: Consumed 1.803s CPU time, 266.9M memory peak. Sep 13 10:20:10.448725 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 10:20:10.450000 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:35572.service - OpenSSH per-connection server daemon (10.0.0.1:35572). 
Sep 13 10:20:10.523075 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 35572 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:20:10.524857 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:20:10.531770 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 10:20:10.533095 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 10:20:10.539553 systemd-logind[1558]: New session 1 of user core. Sep 13 10:20:10.558201 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 10:20:10.561623 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 10:20:10.589053 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 10:20:10.591985 systemd-logind[1558]: New session c1 of user core. Sep 13 10:20:10.770727 systemd[1697]: Queued start job for default target default.target. Sep 13 10:20:10.789227 systemd[1697]: Created slice app.slice - User Application Slice. Sep 13 10:20:10.789260 systemd[1697]: Reached target paths.target - Paths. Sep 13 10:20:10.789311 systemd[1697]: Reached target timers.target - Timers. Sep 13 10:20:10.791082 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 10:20:10.803545 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 10:20:10.803707 systemd[1697]: Reached target sockets.target - Sockets. Sep 13 10:20:10.803763 systemd[1697]: Reached target basic.target - Basic System. Sep 13 10:20:10.803805 systemd[1697]: Reached target default.target - Main User Target. Sep 13 10:20:10.803843 systemd[1697]: Startup finished in 201ms. Sep 13 10:20:10.803999 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 10:20:10.805775 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 13 10:20:10.869767 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:35588.service - OpenSSH per-connection server daemon (10.0.0.1:35588). Sep 13 10:20:10.928698 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 35588 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:20:10.929969 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:20:10.934317 systemd-logind[1558]: New session 2 of user core. Sep 13 10:20:10.941665 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 10:20:10.998279 sshd[1711]: Connection closed by 10.0.0.1 port 35588 Sep 13 10:20:10.998755 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Sep 13 10:20:11.015291 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:35588.service: Deactivated successfully. Sep 13 10:20:11.017083 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 10:20:11.017826 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Sep 13 10:20:11.020600 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:35604.service - OpenSSH per-connection server daemon (10.0.0.1:35604). Sep 13 10:20:11.021124 systemd-logind[1558]: Removed session 2. Sep 13 10:20:11.081435 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 35604 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:20:11.082848 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:20:11.087843 systemd-logind[1558]: New session 3 of user core. Sep 13 10:20:11.101698 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 10:20:11.152537 sshd[1720]: Connection closed by 10.0.0.1 port 35604 Sep 13 10:20:11.152998 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Sep 13 10:20:11.166271 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:35604.service: Deactivated successfully. 
Sep 13 10:20:11.168054 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 10:20:11.168953 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Sep 13 10:20:11.171856 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:35608.service - OpenSSH per-connection server daemon (10.0.0.1:35608). Sep 13 10:20:11.172429 systemd-logind[1558]: Removed session 3. Sep 13 10:20:11.223246 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 35608 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:20:11.224649 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:20:11.229309 systemd-logind[1558]: New session 4 of user core. Sep 13 10:20:11.239639 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 10:20:11.294164 sshd[1729]: Connection closed by 10.0.0.1 port 35608 Sep 13 10:20:11.294891 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Sep 13 10:20:11.308161 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:35608.service: Deactivated successfully. Sep 13 10:20:11.310148 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 10:20:11.310908 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Sep 13 10:20:11.313772 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:35614.service - OpenSSH per-connection server daemon (10.0.0.1:35614). Sep 13 10:20:11.314289 systemd-logind[1558]: Removed session 4. Sep 13 10:20:11.377708 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 35614 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:20:11.379012 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:20:11.384410 systemd-logind[1558]: New session 5 of user core. Sep 13 10:20:11.398846 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 13 10:20:11.458365 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 10:20:11.458718 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:20:11.481677 sudo[1739]: pam_unix(sudo:session): session closed for user root Sep 13 10:20:11.483572 sshd[1738]: Connection closed by 10.0.0.1 port 35614 Sep 13 10:20:11.483973 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Sep 13 10:20:11.499660 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:35614.service: Deactivated successfully. Sep 13 10:20:11.502293 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 10:20:11.503247 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Sep 13 10:20:11.507143 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:35622.service - OpenSSH per-connection server daemon (10.0.0.1:35622). Sep 13 10:20:11.508219 systemd-logind[1558]: Removed session 5. Sep 13 10:20:11.572912 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 35622 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:20:11.574420 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:20:11.578928 systemd-logind[1558]: New session 6 of user core. Sep 13 10:20:11.596828 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 10:20:11.651298 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 10:20:11.651651 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:20:11.660134 sudo[1750]: pam_unix(sudo:session): session closed for user root Sep 13 10:20:11.667179 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 10:20:11.667527 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:20:11.678371 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 10:20:11.722967 augenrules[1772]: No rules Sep 13 10:20:11.724757 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 10:20:11.725040 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 10:20:11.726239 sudo[1749]: pam_unix(sudo:session): session closed for user root Sep 13 10:20:11.727805 sshd[1748]: Connection closed by 10.0.0.1 port 35622 Sep 13 10:20:11.728228 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Sep 13 10:20:11.737067 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:35622.service: Deactivated successfully. Sep 13 10:20:11.738837 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 10:20:11.739560 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Sep 13 10:20:11.742282 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:35624.service - OpenSSH per-connection server daemon (10.0.0.1:35624). Sep 13 10:20:11.742908 systemd-logind[1558]: Removed session 6. Sep 13 10:20:11.801985 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 35624 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:20:11.803295 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:20:11.807713 systemd-logind[1558]: New session 7 of user core. 
Sep 13 10:20:11.817651 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 10:20:11.873377 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 10:20:11.873802 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:20:12.630993 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 10:20:12.648826 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 10:20:13.166746 dockerd[1806]: time="2025-09-13T10:20:13.166653116Z" level=info msg="Starting up" Sep 13 10:20:13.167869 dockerd[1806]: time="2025-09-13T10:20:13.167837297Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 10:20:13.192732 dockerd[1806]: time="2025-09-13T10:20:13.192674566Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 13 10:20:13.855219 dockerd[1806]: time="2025-09-13T10:20:13.855160453Z" level=info msg="Loading containers: start." Sep 13 10:20:13.868534 kernel: Initializing XFRM netlink socket Sep 13 10:20:14.143742 systemd-networkd[1490]: docker0: Link UP Sep 13 10:20:14.150031 dockerd[1806]: time="2025-09-13T10:20:14.149974315Z" level=info msg="Loading containers: done." Sep 13 10:20:14.220924 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3513455304-merged.mount: Deactivated successfully. 
Sep 13 10:20:14.223154 dockerd[1806]: time="2025-09-13T10:20:14.223097617Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 10:20:14.223519 dockerd[1806]: time="2025-09-13T10:20:14.223201682Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 13 10:20:14.223519 dockerd[1806]: time="2025-09-13T10:20:14.223315956Z" level=info msg="Initializing buildkit" Sep 13 10:20:14.256155 dockerd[1806]: time="2025-09-13T10:20:14.256109786Z" level=info msg="Completed buildkit initialization" Sep 13 10:20:14.263255 dockerd[1806]: time="2025-09-13T10:20:14.263188482Z" level=info msg="Daemon has completed initialization" Sep 13 10:20:14.263435 dockerd[1806]: time="2025-09-13T10:20:14.263284742Z" level=info msg="API listen on /run/docker.sock" Sep 13 10:20:14.263472 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 10:20:15.309103 containerd[1578]: time="2025-09-13T10:20:15.309048803Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 10:20:17.339903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027040348.mount: Deactivated successfully. 
Sep 13 10:20:18.732730 containerd[1578]: time="2025-09-13T10:20:18.732661619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:20:18.733350 containerd[1578]: time="2025-09-13T10:20:18.733305416Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 13 10:20:18.734545 containerd[1578]: time="2025-09-13T10:20:18.734458369Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:20:18.737008 containerd[1578]: time="2025-09-13T10:20:18.736968937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:20:18.738003 containerd[1578]: time="2025-09-13T10:20:18.737952742Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.428848895s" Sep 13 10:20:18.738044 containerd[1578]: time="2025-09-13T10:20:18.738004669Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 13 10:20:18.738844 containerd[1578]: time="2025-09-13T10:20:18.738807134Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 10:20:20.111513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Sep 13 10:20:20.135063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 10:20:20.953442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:20:20.957683 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 10:20:21.034887 kubelet[2087]: E0913 10:20:21.034782 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 10:20:21.041766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 10:20:21.041996 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 10:20:21.042434 systemd[1]: kubelet.service: Consumed 375ms CPU time, 111.9M memory peak.
Sep 13 10:20:21.526129 containerd[1578]: time="2025-09-13T10:20:21.526052574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:21.527000 containerd[1578]: time="2025-09-13T10:20:21.526899643Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Sep 13 10:20:21.528245 containerd[1578]: time="2025-09-13T10:20:21.528189021Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:21.530920 containerd[1578]: time="2025-09-13T10:20:21.530850692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:21.532156 containerd[1578]: time="2025-09-13T10:20:21.532105496Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.793255962s"
Sep 13 10:20:21.532242 containerd[1578]: time="2025-09-13T10:20:21.532169566Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 13 10:20:21.533518 containerd[1578]: time="2025-09-13T10:20:21.533374996Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 13 10:20:22.534938 containerd[1578]: time="2025-09-13T10:20:22.534875036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:22.535671 containerd[1578]: time="2025-09-13T10:20:22.535618821Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Sep 13 10:20:22.536947 containerd[1578]: time="2025-09-13T10:20:22.536914792Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:22.539924 containerd[1578]: time="2025-09-13T10:20:22.539881896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:22.541153 containerd[1578]: time="2025-09-13T10:20:22.541097586Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.007686391s"
Sep 13 10:20:22.541153 containerd[1578]: time="2025-09-13T10:20:22.541140526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 13 10:20:22.541946 containerd[1578]: time="2025-09-13T10:20:22.541717448Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 13 10:20:23.592195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024803792.mount: Deactivated successfully.
Sep 13 10:20:24.340609 containerd[1578]: time="2025-09-13T10:20:24.340523131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:24.341212 containerd[1578]: time="2025-09-13T10:20:24.341166678Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Sep 13 10:20:24.342349 containerd[1578]: time="2025-09-13T10:20:24.342308259Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:24.344296 containerd[1578]: time="2025-09-13T10:20:24.344257374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:24.344949 containerd[1578]: time="2025-09-13T10:20:24.344908525Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.803151513s"
Sep 13 10:20:24.344949 containerd[1578]: time="2025-09-13T10:20:24.344943150Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 13 10:20:24.346543 containerd[1578]: time="2025-09-13T10:20:24.346327817Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 13 10:20:24.822719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947790419.mount: Deactivated successfully.
Sep 13 10:20:26.496707 containerd[1578]: time="2025-09-13T10:20:26.496628769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:26.497532 containerd[1578]: time="2025-09-13T10:20:26.497474495Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 13 10:20:26.498995 containerd[1578]: time="2025-09-13T10:20:26.498962976Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:26.504333 containerd[1578]: time="2025-09-13T10:20:26.504293884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:26.505263 containerd[1578]: time="2025-09-13T10:20:26.505229248Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.158867717s"
Sep 13 10:20:26.505263 containerd[1578]: time="2025-09-13T10:20:26.505260036Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 13 10:20:26.506201 containerd[1578]: time="2025-09-13T10:20:26.506172106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 10:20:26.997561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158122674.mount: Deactivated successfully.
Sep 13 10:20:27.003748 containerd[1578]: time="2025-09-13T10:20:27.003705148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 10:20:27.004590 containerd[1578]: time="2025-09-13T10:20:27.004521970Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 13 10:20:27.005795 containerd[1578]: time="2025-09-13T10:20:27.005733742Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 10:20:27.007832 containerd[1578]: time="2025-09-13T10:20:27.007805227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 10:20:27.008431 containerd[1578]: time="2025-09-13T10:20:27.008409591Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 502.210434ms"
Sep 13 10:20:27.008469 containerd[1578]: time="2025-09-13T10:20:27.008437142Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 10:20:27.008990 containerd[1578]: time="2025-09-13T10:20:27.008967847Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 13 10:20:27.577089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969614331.mount: Deactivated successfully.
Sep 13 10:20:29.309822 containerd[1578]: time="2025-09-13T10:20:29.309747581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:29.310569 containerd[1578]: time="2025-09-13T10:20:29.310522284Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Sep 13 10:20:29.311902 containerd[1578]: time="2025-09-13T10:20:29.311867947Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:29.314573 containerd[1578]: time="2025-09-13T10:20:29.314544877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:29.315649 containerd[1578]: time="2025-09-13T10:20:29.315598543Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.306594658s"
Sep 13 10:20:29.315649 containerd[1578]: time="2025-09-13T10:20:29.315644740Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 13 10:20:31.292460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 10:20:31.294310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 10:20:31.727373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:20:31.739806 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 10:20:31.787425 kubelet[2256]: E0913 10:20:31.787352 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 10:20:31.792115 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 10:20:31.792331 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 10:20:31.792726 systemd[1]: kubelet.service: Consumed 395ms CPU time, 108.7M memory peak.
Sep 13 10:20:32.239951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:20:32.240107 systemd[1]: kubelet.service: Consumed 395ms CPU time, 108.7M memory peak.
Sep 13 10:20:32.242261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 10:20:32.266558 systemd[1]: Reload requested from client PID 2272 ('systemctl') (unit session-7.scope)...
Sep 13 10:20:32.266575 systemd[1]: Reloading...
Sep 13 10:20:32.395592 zram_generator::config[2321]: No configuration found.
Sep 13 10:20:33.090152 systemd[1]: Reloading finished in 823 ms.
Sep 13 10:20:33.146612 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 10:20:33.146737 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 10:20:33.147104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:20:33.147164 systemd[1]: kubelet.service: Consumed 159ms CPU time, 98.2M memory peak.
Sep 13 10:20:33.149225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 10:20:33.354897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:20:33.373790 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 10:20:33.442670 kubelet[2363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 10:20:33.442670 kubelet[2363]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 10:20:33.442670 kubelet[2363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 10:20:33.443111 kubelet[2363]: I0913 10:20:33.442721 2363 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 10:20:34.202991 kubelet[2363]: I0913 10:20:34.202915 2363 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 10:20:34.202991 kubelet[2363]: I0913 10:20:34.202967 2363 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 10:20:34.203266 kubelet[2363]: I0913 10:20:34.203241 2363 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 10:20:34.231201 kubelet[2363]: I0913 10:20:34.231127 2363 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 10:20:34.231365 kubelet[2363]: E0913 10:20:34.231237 2363 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 13 10:20:34.237793 kubelet[2363]: I0913 10:20:34.237771 2363 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 13 10:20:34.246083 kubelet[2363]: I0913 10:20:34.246053 2363 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 10:20:34.246379 kubelet[2363]: I0913 10:20:34.246324 2363 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 10:20:34.246583 kubelet[2363]: I0913 10:20:34.246364 2363 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 10:20:34.246828 kubelet[2363]: I0913 10:20:34.246590 2363 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 10:20:34.246828 kubelet[2363]: I0913 10:20:34.246601 2363 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 10:20:34.246828 kubelet[2363]: I0913 10:20:34.246788 2363 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 10:20:34.248578 kubelet[2363]: I0913 10:20:34.248556 2363 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 10:20:34.248644 kubelet[2363]: I0913 10:20:34.248593 2363 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 10:20:34.248644 kubelet[2363]: I0913 10:20:34.248633 2363 kubelet.go:386] "Adding apiserver pod source"
Sep 13 10:20:34.249939 kubelet[2363]: I0913 10:20:34.249920 2363 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 10:20:34.299748 kubelet[2363]: I0913 10:20:34.299132 2363 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 13 10:20:34.299748 kubelet[2363]: I0913 10:20:34.299648 2363 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 10:20:34.301537 kubelet[2363]: W0913 10:20:34.301520 2363 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 10:20:34.305228 kubelet[2363]: I0913 10:20:34.304654 2363 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 10:20:34.305228 kubelet[2363]: I0913 10:20:34.304706 2363 server.go:1289] "Started kubelet"
Sep 13 10:20:34.305910 kubelet[2363]: E0913 10:20:34.305815 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 10:20:34.305910 kubelet[2363]: E0913 10:20:34.305813 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 10:20:34.307158 kubelet[2363]: I0913 10:20:34.306771 2363 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 10:20:34.308822 kubelet[2363]: I0913 10:20:34.308786 2363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 10:20:34.318926 kubelet[2363]: I0913 10:20:34.318893 2363 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 10:20:34.319872 kubelet[2363]: I0913 10:20:34.319835 2363 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 10:20:34.321638 kubelet[2363]: I0913 10:20:34.321612 2363 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 10:20:34.321864 kubelet[2363]: E0913 10:20:34.321827 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:20:34.324973 kubelet[2363]: I0913 10:20:34.324938 2363 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 10:20:34.325046 kubelet[2363]: I0913 10:20:34.325012 2363 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 10:20:34.345553 kubelet[2363]: E0913 10:20:34.345285 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms"
Sep 13 10:20:34.345553 kubelet[2363]: E0913 10:20:34.345404 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 10:20:34.346456 kubelet[2363]: I0913 10:20:34.346422 2363 factory.go:223] Registration of the systemd container factory successfully
Sep 13 10:20:34.346605 kubelet[2363]: I0913 10:20:34.346573 2363 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 10:20:34.347755 kubelet[2363]: E0913 10:20:34.347734 2363 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 10:20:34.347852 kubelet[2363]: I0913 10:20:34.347828 2363 factory.go:223] Registration of the containerd container factory successfully
Sep 13 10:20:35.281716 kubelet[2363]: I0913 10:20:35.280089 2363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 10:20:35.287386 kubelet[2363]: E0913 10:20:35.284861 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 10:20:35.287386 kubelet[2363]: I0913 10:20:35.285516 2363 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 10:20:35.287386 kubelet[2363]: E0913 10:20:35.285807 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 10:20:35.287386 kubelet[2363]: E0913 10:20:35.286528 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms"
Sep 13 10:20:35.287386 kubelet[2363]: E0913 10:20:35.286903 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 10:20:35.289475 kubelet[2363]: I0913 10:20:35.289176 2363 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 10:20:35.289475 kubelet[2363]: I0913 10:20:35.289194 2363 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 10:20:35.289475 kubelet[2363]: I0913 10:20:35.289214 2363 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 10:20:35.289765 kubelet[2363]: E0913 10:20:35.288807 2363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864d058c788f285 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 10:20:34.304676485 +0000 UTC m=+0.899353119,LastTimestamp:2025-09-13 10:20:34.304676485 +0000 UTC m=+0.899353119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 13 10:20:35.290378 kubelet[2363]: I0913 10:20:35.290349 2363 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 10:20:35.291882 kubelet[2363]: I0913 10:20:35.291855 2363 policy_none.go:49] "None policy: Start"
Sep 13 10:20:35.291935 kubelet[2363]: I0913 10:20:35.291892 2363 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 10:20:35.291935 kubelet[2363]: I0913 10:20:35.291915 2363 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 10:20:35.292206 kubelet[2363]: I0913 10:20:35.292165 2363 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 10:20:35.292266 kubelet[2363]: I0913 10:20:35.292221 2363 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 10:20:35.292266 kubelet[2363]: I0913 10:20:35.292254 2363 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 10:20:35.292321 kubelet[2363]: I0913 10:20:35.292269 2363 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 10:20:35.292346 kubelet[2363]: E0913 10:20:35.292328 2363 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 10:20:35.294906 kubelet[2363]: E0913 10:20:35.294865 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 10:20:35.299483 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 10:20:35.312513 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 10:20:35.315536 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 13 10:20:35.325426 kubelet[2363]: E0913 10:20:35.325386 2363 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 10:20:35.325699 kubelet[2363]: I0913 10:20:35.325678 2363 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 10:20:35.325793 kubelet[2363]: I0913 10:20:35.325701 2363 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 10:20:35.326001 kubelet[2363]: I0913 10:20:35.325969 2363 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 10:20:35.327386 kubelet[2363]: E0913 10:20:35.327365 2363 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 10:20:35.327524 kubelet[2363]: E0913 10:20:35.327489 2363 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 13 10:20:35.401196 systemd[1]: Created slice kubepods-burstable-podf159b8508b705fc250ee4d617c2f063e.slice - libcontainer container kubepods-burstable-podf159b8508b705fc250ee4d617c2f063e.slice.
Sep 13 10:20:35.419979 kubelet[2363]: E0913 10:20:35.419935 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 10:20:35.423354 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice.
Sep 13 10:20:35.425219 kubelet[2363]: E0913 10:20:35.425191 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 10:20:35.426832 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice.
Sep 13 10:20:35.427618 kubelet[2363]: I0913 10:20:35.426837 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 10:20:35.427618 kubelet[2363]: E0913 10:20:35.427409 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Sep 13 10:20:35.428402 kubelet[2363]: E0913 10:20:35.428363 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 10:20:35.486594 kubelet[2363]: I0913 10:20:35.486547 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f159b8508b705fc250ee4d617c2f063e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f159b8508b705fc250ee4d617c2f063e\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:20:35.486594 kubelet[2363]: I0913 10:20:35.486584 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f159b8508b705fc250ee4d617c2f063e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f159b8508b705fc250ee4d617c2f063e\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:20:35.486594 kubelet[2363]: I0913 10:20:35.486607 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f159b8508b705fc250ee4d617c2f063e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f159b8508b705fc250ee4d617c2f063e\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:20:35.486795 kubelet[2363]: I0913 10:20:35.486623 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:35.486795 kubelet[2363]: I0913 10:20:35.486673 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:35.486795 kubelet[2363]: I0913 10:20:35.486740 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:35.486795 kubelet[2363]: I0913 10:20:35.486774 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:35.486917 kubelet[2363]: I0913 10:20:35.486825 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 10:20:35.486917 kubelet[2363]: I0913 10:20:35.486847 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:35.584877 kubelet[2363]: E0913 10:20:35.584707 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 10:20:35.628406 kubelet[2363]: I0913 10:20:35.628359 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 10:20:35.628744 kubelet[2363]: E0913 10:20:35.628704 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Sep 13 10:20:35.687409 kubelet[2363]: E0913 10:20:35.687358 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms"
Sep 13 10:20:35.721400 containerd[1578]: time="2025-09-13T10:20:35.721349894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f159b8508b705fc250ee4d617c2f063e,Namespace:kube-system,Attempt:0,}"
Sep 13 10:20:35.726166 containerd[1578]: time="2025-09-13T10:20:35.726117715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}"
Sep 13 10:20:35.729640 containerd[1578]: time="2025-09-13T10:20:35.729614282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}"
Sep 13 10:20:35.905463 containerd[1578]: time="2025-09-13T10:20:35.905295838Z" level=info msg="connecting to shim 729dd85bd3315df8e51dfe1c4c12a5dc20c222dfd18f6508244cd538ff2f1795" address="unix:///run/containerd/s/a1c8869159f319fce86a4419ce07a5ea7f774ba046217acfc43c142942b1401e" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:20:35.905897 containerd[1578]: time="2025-09-13T10:20:35.905854205Z" level=info msg="connecting to shim c22a6fc26e651f76678125fd3d6631d1923f2fa3c1a320a3360c0cf72d9b798c" address="unix:///run/containerd/s/12e204cbff7b5c450923a7c7b9d885873384c4d6e975c5fd6f39d5bac8ea1cc6" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:20:35.916790 containerd[1578]: time="2025-09-13T10:20:35.916735932Z" level=info msg="connecting to shim 8f69e4ae9b63d41224779b7b0fc6ebc4fd584675180d76b001c0aa65b1dfba23" address="unix:///run/containerd/s/cd51c9b0d36075603955bb598a55349f6ced826bc6ce78e7579317f3432586a6" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:20:35.944670 systemd[1]: Started cri-containerd-729dd85bd3315df8e51dfe1c4c12a5dc20c222dfd18f6508244cd538ff2f1795.scope - libcontainer container 729dd85bd3315df8e51dfe1c4c12a5dc20c222dfd18f6508244cd538ff2f1795.
Sep 13 10:20:35.949186 systemd[1]: Started cri-containerd-c22a6fc26e651f76678125fd3d6631d1923f2fa3c1a320a3360c0cf72d9b798c.scope - libcontainer container c22a6fc26e651f76678125fd3d6631d1923f2fa3c1a320a3360c0cf72d9b798c.
Sep 13 10:20:35.958430 systemd[1]: Started cri-containerd-8f69e4ae9b63d41224779b7b0fc6ebc4fd584675180d76b001c0aa65b1dfba23.scope - libcontainer container 8f69e4ae9b63d41224779b7b0fc6ebc4fd584675180d76b001c0aa65b1dfba23.
Sep 13 10:20:36.032316 kubelet[2363]: I0913 10:20:36.031560 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 10:20:36.032316 kubelet[2363]: E0913 10:20:36.031996 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Sep 13 10:20:36.091231 containerd[1578]: time="2025-09-13T10:20:36.091176180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f159b8508b705fc250ee4d617c2f063e,Namespace:kube-system,Attempt:0,} returns sandbox id \"729dd85bd3315df8e51dfe1c4c12a5dc20c222dfd18f6508244cd538ff2f1795\""
Sep 13 10:20:36.095365 containerd[1578]: time="2025-09-13T10:20:36.095233569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c22a6fc26e651f76678125fd3d6631d1923f2fa3c1a320a3360c0cf72d9b798c\""
Sep 13 10:20:36.100769 containerd[1578]: time="2025-09-13T10:20:36.100715259Z" level=info msg="CreateContainer within sandbox \"729dd85bd3315df8e51dfe1c4c12a5dc20c222dfd18f6508244cd538ff2f1795\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 10:20:36.101123 containerd[1578]: time="2025-09-13T10:20:36.101097777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f69e4ae9b63d41224779b7b0fc6ebc4fd584675180d76b001c0aa65b1dfba23\""
Sep 13 10:20:36.103933 containerd[1578]: time="2025-09-13T10:20:36.103901304Z" level=info msg="CreateContainer within sandbox \"c22a6fc26e651f76678125fd3d6631d1923f2fa3c1a320a3360c0cf72d9b798c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 10:20:36.106516 containerd[1578]: time="2025-09-13T10:20:36.106268774Z" level=info msg="CreateContainer within sandbox \"8f69e4ae9b63d41224779b7b0fc6ebc4fd584675180d76b001c0aa65b1dfba23\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 10:20:36.113151 containerd[1578]: time="2025-09-13T10:20:36.113114743Z" level=info msg="Container a306461244155f1bbf1e2751311fa3d93b6a16fd654cc46652c2cc12b156c3f7: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:20:36.119331 containerd[1578]: time="2025-09-13T10:20:36.119280245Z" level=info msg="Container 6c8d949f1a22c96fd6e1204e2744855bb7613b829e446027c49f207983b44862: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:20:36.122538 containerd[1578]: time="2025-09-13T10:20:36.122457023Z" level=info msg="Container 8a9c92f2e7c5841c384568b5c62a87067c2b514abba807f8a2b42cab3ca5f495: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:20:36.126616 containerd[1578]: time="2025-09-13T10:20:36.126475469Z" level=info msg="CreateContainer within sandbox \"729dd85bd3315df8e51dfe1c4c12a5dc20c222dfd18f6508244cd538ff2f1795\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a306461244155f1bbf1e2751311fa3d93b6a16fd654cc46652c2cc12b156c3f7\""
Sep 13 10:20:36.127107 containerd[1578]: time="2025-09-13T10:20:36.127076546Z" level=info msg="StartContainer for \"a306461244155f1bbf1e2751311fa3d93b6a16fd654cc46652c2cc12b156c3f7\""
Sep 13 10:20:36.128516 containerd[1578]: time="2025-09-13T10:20:36.128473726Z" level=info msg="connecting to shim a306461244155f1bbf1e2751311fa3d93b6a16fd654cc46652c2cc12b156c3f7" address="unix:///run/containerd/s/a1c8869159f319fce86a4419ce07a5ea7f774ba046217acfc43c142942b1401e" protocol=ttrpc version=3
Sep 13 10:20:36.132135 containerd[1578]: time="2025-09-13T10:20:36.132097382Z" level=info msg="CreateContainer within sandbox \"8f69e4ae9b63d41224779b7b0fc6ebc4fd584675180d76b001c0aa65b1dfba23\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a9c92f2e7c5841c384568b5c62a87067c2b514abba807f8a2b42cab3ca5f495\""
Sep 13 10:20:36.133525 containerd[1578]: time="2025-09-13T10:20:36.133099010Z" level=info msg="StartContainer for \"8a9c92f2e7c5841c384568b5c62a87067c2b514abba807f8a2b42cab3ca5f495\""
Sep 13 10:20:36.134205 containerd[1578]: time="2025-09-13T10:20:36.134184646Z" level=info msg="connecting to shim 8a9c92f2e7c5841c384568b5c62a87067c2b514abba807f8a2b42cab3ca5f495" address="unix:///run/containerd/s/cd51c9b0d36075603955bb598a55349f6ced826bc6ce78e7579317f3432586a6" protocol=ttrpc version=3
Sep 13 10:20:36.134374 containerd[1578]: time="2025-09-13T10:20:36.134204223Z" level=info msg="CreateContainer within sandbox \"c22a6fc26e651f76678125fd3d6631d1923f2fa3c1a320a3360c0cf72d9b798c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6c8d949f1a22c96fd6e1204e2744855bb7613b829e446027c49f207983b44862\""
Sep 13 10:20:36.135410 containerd[1578]: time="2025-09-13T10:20:36.135381250Z" level=info msg="StartContainer for \"6c8d949f1a22c96fd6e1204e2744855bb7613b829e446027c49f207983b44862\""
Sep 13 10:20:36.143222 containerd[1578]: time="2025-09-13T10:20:36.143186628Z" level=info msg="connecting to shim 6c8d949f1a22c96fd6e1204e2744855bb7613b829e446027c49f207983b44862" address="unix:///run/containerd/s/12e204cbff7b5c450923a7c7b9d885873384c4d6e975c5fd6f39d5bac8ea1cc6" protocol=ttrpc version=3
Sep 13 10:20:36.149640 systemd[1]: Started cri-containerd-a306461244155f1bbf1e2751311fa3d93b6a16fd654cc46652c2cc12b156c3f7.scope - libcontainer container a306461244155f1bbf1e2751311fa3d93b6a16fd654cc46652c2cc12b156c3f7.
Sep 13 10:20:36.161640 systemd[1]: Started cri-containerd-8a9c92f2e7c5841c384568b5c62a87067c2b514abba807f8a2b42cab3ca5f495.scope - libcontainer container 8a9c92f2e7c5841c384568b5c62a87067c2b514abba807f8a2b42cab3ca5f495.
Sep 13 10:20:36.172650 systemd[1]: Started cri-containerd-6c8d949f1a22c96fd6e1204e2744855bb7613b829e446027c49f207983b44862.scope - libcontainer container 6c8d949f1a22c96fd6e1204e2744855bb7613b829e446027c49f207983b44862.
Sep 13 10:20:36.206891 containerd[1578]: time="2025-09-13T10:20:36.206335104Z" level=info msg="StartContainer for \"a306461244155f1bbf1e2751311fa3d93b6a16fd654cc46652c2cc12b156c3f7\" returns successfully"
Sep 13 10:20:36.224636 containerd[1578]: time="2025-09-13T10:20:36.224594597Z" level=info msg="StartContainer for \"6c8d949f1a22c96fd6e1204e2744855bb7613b829e446027c49f207983b44862\" returns successfully"
Sep 13 10:20:36.228334 containerd[1578]: time="2025-09-13T10:20:36.228308252Z" level=info msg="StartContainer for \"8a9c92f2e7c5841c384568b5c62a87067c2b514abba807f8a2b42cab3ca5f495\" returns successfully"
Sep 13 10:20:36.301966 kubelet[2363]: E0913 10:20:36.301798 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 10:20:36.304690 kubelet[2363]: E0913 10:20:36.304581 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 10:20:36.307370 kubelet[2363]: E0913 10:20:36.307232 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 10:20:36.835988 kubelet[2363]: I0913 10:20:36.835851 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 10:20:37.295301 kubelet[2363]: E0913 10:20:37.295222 2363 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 13 10:20:37.306880 kubelet[2363]: I0913 10:20:37.306848 2363 apiserver.go:52] "Watching apiserver"
Sep 13 10:20:37.310430 kubelet[2363]: E0913 10:20:37.310410 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 10:20:37.311343 kubelet[2363]: E0913 10:20:37.311013 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 10:20:37.325669 kubelet[2363]: I0913 10:20:37.325652 2363 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 10:20:37.372545 kubelet[2363]: I0913 10:20:37.371888 2363 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 13 10:20:37.422842 kubelet[2363]: I0913 10:20:37.422790 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:37.432518 kubelet[2363]: E0913 10:20:37.432463 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:37.432898 kubelet[2363]: I0913 10:20:37.432710 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 10:20:37.434472 kubelet[2363]: E0913 10:20:37.434455 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 13 10:20:37.434643 kubelet[2363]: I0913 10:20:37.434540 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 10:20:37.436122 kubelet[2363]: E0913 10:20:37.436084 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 13 10:20:39.484316 systemd[1]: Reload requested from client PID 2651 ('systemctl') (unit session-7.scope)...
Sep 13 10:20:39.484341 systemd[1]: Reloading...
Sep 13 10:20:39.584535 zram_generator::config[2697]: No configuration found.
Sep 13 10:20:39.587117 kubelet[2363]: I0913 10:20:39.587061 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 10:20:39.814756 systemd[1]: Reloading finished in 329 ms.
Sep 13 10:20:39.844574 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 10:20:39.866881 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 10:20:39.867217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:20:39.867274 systemd[1]: kubelet.service: Consumed 1.534s CPU time, 134.7M memory peak.
Sep 13 10:20:39.869249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 10:20:40.098550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 10:20:40.103192 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 10:20:40.151749 kubelet[2739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 10:20:40.151749 kubelet[2739]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 10:20:40.151749 kubelet[2739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 10:20:40.152313 kubelet[2739]: I0913 10:20:40.151789 2739 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 10:20:40.161236 kubelet[2739]: I0913 10:20:40.161191 2739 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 10:20:40.161236 kubelet[2739]: I0913 10:20:40.161224 2739 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 10:20:40.161790 kubelet[2739]: I0913 10:20:40.161770 2739 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 10:20:40.163030 kubelet[2739]: I0913 10:20:40.163010 2739 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 10:20:40.165309 kubelet[2739]: I0913 10:20:40.165275 2739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 10:20:40.169969 kubelet[2739]: I0913 10:20:40.169948 2739 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 10:20:40.177108 kubelet[2739]: I0913 10:20:40.177075 2739 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 10:20:40.177340 kubelet[2739]: I0913 10:20:40.177301 2739 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 10:20:40.177468 kubelet[2739]: I0913 10:20:40.177326 2739 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 10:20:40.177578 kubelet[2739]: I0913 10:20:40.177477 2739 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 10:20:40.177578 
kubelet[2739]: I0913 10:20:40.177486 2739 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 10:20:40.177633 kubelet[2739]: I0913 10:20:40.177576 2739 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:20:40.177784 kubelet[2739]: I0913 10:20:40.177766 2739 kubelet.go:480] "Attempting to sync node with API server" Sep 13 10:20:40.177784 kubelet[2739]: I0913 10:20:40.177782 2739 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 10:20:40.177834 kubelet[2739]: I0913 10:20:40.177810 2739 kubelet.go:386] "Adding apiserver pod source" Sep 13 10:20:40.177865 kubelet[2739]: I0913 10:20:40.177838 2739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 10:20:40.178842 kubelet[2739]: I0913 10:20:40.178818 2739 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 13 10:20:40.181538 kubelet[2739]: I0913 10:20:40.181259 2739 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 10:20:40.187032 kubelet[2739]: I0913 10:20:40.186726 2739 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 10:20:40.187159 kubelet[2739]: I0913 10:20:40.187146 2739 server.go:1289] "Started kubelet" Sep 13 10:20:40.187597 kubelet[2739]: I0913 10:20:40.187558 2739 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 10:20:40.189516 kubelet[2739]: I0913 10:20:40.187526 2739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 10:20:40.189516 kubelet[2739]: I0913 10:20:40.188842 2739 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 10:20:40.189516 kubelet[2739]: I0913 10:20:40.189065 2739 server.go:317] "Adding debug handlers to kubelet server" Sep 13 10:20:40.193674 
kubelet[2739]: E0913 10:20:40.193624 2739 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 10:20:40.193982 kubelet[2739]: I0913 10:20:40.193968 2739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 10:20:40.194551 kubelet[2739]: I0913 10:20:40.194521 2739 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 10:20:40.194960 kubelet[2739]: I0913 10:20:40.194943 2739 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 10:20:40.195816 kubelet[2739]: I0913 10:20:40.195792 2739 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 10:20:40.196071 kubelet[2739]: I0913 10:20:40.195973 2739 reconciler.go:26] "Reconciler: start to sync state" Sep 13 10:20:40.197395 kubelet[2739]: I0913 10:20:40.197318 2739 factory.go:223] Registration of the systemd container factory successfully Sep 13 10:20:40.197447 kubelet[2739]: I0913 10:20:40.197428 2739 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 10:20:40.201159 kubelet[2739]: I0913 10:20:40.201112 2739 factory.go:223] Registration of the containerd container factory successfully Sep 13 10:20:40.212669 kubelet[2739]: I0913 10:20:40.212633 2739 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 10:20:40.214461 kubelet[2739]: I0913 10:20:40.214423 2739 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 13 10:20:40.214850 kubelet[2739]: I0913 10:20:40.214552 2739 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 10:20:40.214850 kubelet[2739]: I0913 10:20:40.214579 2739 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 10:20:40.214850 kubelet[2739]: I0913 10:20:40.214587 2739 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 10:20:40.214850 kubelet[2739]: E0913 10:20:40.214648 2739 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 10:20:40.222816 sudo[2771]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 10:20:40.223211 sudo[2771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 10:20:40.247995 kubelet[2739]: I0913 10:20:40.247970 2739 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 10:20:40.248217 kubelet[2739]: I0913 10:20:40.248185 2739 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 10:20:40.248295 kubelet[2739]: I0913 10:20:40.248284 2739 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:20:40.248475 kubelet[2739]: I0913 10:20:40.248460 2739 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 10:20:40.248573 kubelet[2739]: I0913 10:20:40.248547 2739 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 10:20:40.248624 kubelet[2739]: I0913 10:20:40.248615 2739 policy_none.go:49] "None policy: Start" Sep 13 10:20:40.248676 kubelet[2739]: I0913 10:20:40.248667 2739 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 10:20:40.248768 kubelet[2739]: I0913 10:20:40.248727 2739 state_mem.go:35] "Initializing new in-memory state store" Sep 13 10:20:40.249601 kubelet[2739]: I0913 10:20:40.249586 2739 state_mem.go:75] "Updated machine memory 
state" Sep 13 10:20:40.254351 kubelet[2739]: E0913 10:20:40.254333 2739 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 10:20:40.254613 kubelet[2739]: I0913 10:20:40.254595 2739 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 10:20:40.254710 kubelet[2739]: I0913 10:20:40.254678 2739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 10:20:40.255618 kubelet[2739]: I0913 10:20:40.255590 2739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 10:20:40.261178 kubelet[2739]: E0913 10:20:40.261148 2739 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 10:20:40.316225 kubelet[2739]: I0913 10:20:40.316170 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:20:40.316383 kubelet[2739]: I0913 10:20:40.316280 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 10:20:40.318297 kubelet[2739]: I0913 10:20:40.316459 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 10:20:40.324628 kubelet[2739]: E0913 10:20:40.324505 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 10:20:40.384762 kubelet[2739]: I0913 10:20:40.384646 2739 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:20:40.395539 kubelet[2739]: I0913 10:20:40.395205 2739 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 10:20:40.395539 kubelet[2739]: I0913 10:20:40.395292 2739 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 
10:20:40.498529 kubelet[2739]: I0913 10:20:40.497891 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:40.498529 kubelet[2739]: I0913 10:20:40.497939 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:40.498529 kubelet[2739]: I0913 10:20:40.497963 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 10:20:40.498529 kubelet[2739]: I0913 10:20:40.497982 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f159b8508b705fc250ee4d617c2f063e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f159b8508b705fc250ee4d617c2f063e\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:20:40.498529 kubelet[2739]: I0913 10:20:40.497997 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:40.498784 kubelet[2739]: I0913 10:20:40.498010 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:40.498784 kubelet[2739]: I0913 10:20:40.498024 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f159b8508b705fc250ee4d617c2f063e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f159b8508b705fc250ee4d617c2f063e\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:20:40.498784 kubelet[2739]: I0913 10:20:40.498036 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f159b8508b705fc250ee4d617c2f063e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f159b8508b705fc250ee4d617c2f063e\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 10:20:40.498784 kubelet[2739]: I0913 10:20:40.498053 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:40.588252 sudo[2771]: pam_unix(sudo:session): session closed for user root
Sep 13 10:20:41.178943 kubelet[2739]: I0913 10:20:41.178883 2739 apiserver.go:52] "Watching apiserver"
Sep 13 10:20:41.196101 kubelet[2739]: I0913 10:20:41.196071 2739 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 10:20:41.229150 kubelet[2739]: I0913 10:20:41.229114 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 10:20:41.229330 kubelet[2739]: I0913 10:20:41.229305 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 10:20:41.229434 kubelet[2739]: I0913 10:20:41.229416 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:41.673359 kubelet[2739]: E0913 10:20:41.673303 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 13 10:20:41.674802 kubelet[2739]: E0913 10:20:41.674519 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 13 10:20:41.675732 kubelet[2739]: E0913 10:20:41.675693 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 13 10:20:41.689965 kubelet[2739]: I0913 10:20:41.689885 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.689851711 podStartE2EDuration="1.689851711s" podCreationTimestamp="2025-09-13 10:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:20:41.677003812 +0000 UTC m=+1.566368162" watchObservedRunningTime="2025-09-13 10:20:41.689851711 +0000 UTC m=+1.579216061"
Sep 13 10:20:41.690994 kubelet[2739]: I0913 10:20:41.690871 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.6908646579999997 podStartE2EDuration="2.690864658s" podCreationTimestamp="2025-09-13 10:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:20:41.690605801 +0000 UTC m=+1.579970151" watchObservedRunningTime="2025-09-13 10:20:41.690864658 +0000 UTC m=+1.580229008"
Sep 13 10:20:41.703746 kubelet[2739]: I0913 10:20:41.703542 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.703525617 podStartE2EDuration="1.703525617s" podCreationTimestamp="2025-09-13 10:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:20:41.7033553 +0000 UTC m=+1.592719650" watchObservedRunningTime="2025-09-13 10:20:41.703525617 +0000 UTC m=+1.592889967"
Sep 13 10:20:42.254753 sudo[1785]: pam_unix(sudo:session): session closed for user root
Sep 13 10:20:42.257044 sshd[1784]: Connection closed by 10.0.0.1 port 35624
Sep 13 10:20:42.257668 sshd-session[1781]: pam_unix(sshd:session): session closed for user core
Sep 13 10:20:42.263198 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:35624.service: Deactivated successfully.
Sep 13 10:20:42.265835 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 10:20:42.266123 systemd[1]: session-7.scope: Consumed 5.716s CPU time, 256.4M memory peak.
Sep 13 10:20:42.268834 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit.
Sep 13 10:20:42.270475 systemd-logind[1558]: Removed session 7.
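Editor's aside (not part of the captured journal): the "Observed pod startup duration" entries above are klog-formatted key=value records, so the pod name and SLO duration can be pulled out mechanically. A minimal sketch, using one line copied verbatim from the journal above:

```python
import re

# A klog line copied from the journal above (kube-scheduler startup record).
LINE = ('I0913 10:20:41.690871 2739 pod_startup_latency_tracker.go:104] '
        '"Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" '
        'podStartSLOduration=2.6908646579999997 podStartE2EDuration="2.690864658s"')

def parse_startup_slo(line):
    """Return (pod, seconds) from a pod_startup_latency_tracker klog line, or None."""
    pod = re.search(r'pod="([^"]+)"', line)
    slo = re.search(r'podStartSLOduration=([0-9.]+)', line)
    if not (pod and slo):
        return None
    return pod.group(1), float(slo.group(1))

print(parse_startup_slo(LINE))
```

The same two regexes work for the controller-manager and apiserver entries, since klog emits one `key=value` pair per field.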
Sep 13 10:20:45.331456 kubelet[2739]: I0913 10:20:45.331415 2739 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 10:20:45.332033 kubelet[2739]: I0913 10:20:45.331917 2739 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 10:20:45.332173 containerd[1578]: time="2025-09-13T10:20:45.331737547Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 10:20:46.545087 systemd[1]: Created slice kubepods-besteffort-pod05ce0d42_204f_46e6_bb4a_c31f55655d8d.slice - libcontainer container kubepods-besteffort-pod05ce0d42_204f_46e6_bb4a_c31f55655d8d.slice.
Sep 13 10:20:46.585459 systemd[1]: Created slice kubepods-burstable-pod878a6ecc_c3dc_4e20_bc71_8036d0f91f72.slice - libcontainer container kubepods-burstable-pod878a6ecc_c3dc_4e20_bc71_8036d0f91f72.slice.
Sep 13 10:20:46.638352 kubelet[2739]: I0913 10:20:46.638288 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-hostproc\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.638352 kubelet[2739]: I0913 10:20:46.638333 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-etc-cni-netd\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.638352 kubelet[2739]: I0913 10:20:46.638366 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-lib-modules\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.638352 kubelet[2739]: I0913 10:20:46.638398 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-host-proc-sys-kernel\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.638352 kubelet[2739]: I0913 10:20:46.638451 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05ce0d42-204f-46e6-bb4a-c31f55655d8d-xtables-lock\") pod \"kube-proxy-fdg8p\" (UID: \"05ce0d42-204f-46e6-bb4a-c31f55655d8d\") " pod="kube-system/kube-proxy-fdg8p"
Sep 13 10:20:46.638352 kubelet[2739]: I0913 10:20:46.638481 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-run\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.639144 kubelet[2739]: I0913 10:20:46.638518 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-cgroup\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.639144 kubelet[2739]: I0913 10:20:46.638532 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cni-path\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.639144 kubelet[2739]: I0913 10:20:46.638552 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/05ce0d42-204f-46e6-bb4a-c31f55655d8d-kube-proxy\") pod \"kube-proxy-fdg8p\" (UID: \"05ce0d42-204f-46e6-bb4a-c31f55655d8d\") " pod="kube-system/kube-proxy-fdg8p"
Sep 13 10:20:46.639144 kubelet[2739]: I0913 10:20:46.638566 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05ce0d42-204f-46e6-bb4a-c31f55655d8d-lib-modules\") pod \"kube-proxy-fdg8p\" (UID: \"05ce0d42-204f-46e6-bb4a-c31f55655d8d\") " pod="kube-system/kube-proxy-fdg8p"
Sep 13 10:20:46.639144 kubelet[2739]: I0913 10:20:46.638579 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-hubble-tls\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.639144 kubelet[2739]: I0913 10:20:46.638595 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlf7n\" (UniqueName: \"kubernetes.io/projected/05ce0d42-204f-46e6-bb4a-c31f55655d8d-kube-api-access-xlf7n\") pod \"kube-proxy-fdg8p\" (UID: \"05ce0d42-204f-46e6-bb4a-c31f55655d8d\") " pod="kube-system/kube-proxy-fdg8p"
Sep 13 10:20:46.639278 kubelet[2739]: I0913 10:20:46.638611 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-xtables-lock\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.639278 kubelet[2739]: I0913 10:20:46.638634 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-clustermesh-secrets\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.639278 kubelet[2739]: I0913 10:20:46.638652 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-config-path\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.639278 kubelet[2739]: I0913 10:20:46.638665 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-host-proc-sys-net\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.639278 kubelet[2739]: I0913 10:20:46.638685 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46wjq\" (UniqueName: \"kubernetes.io/projected/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-kube-api-access-46wjq\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.639385 kubelet[2739]: I0913 10:20:46.638699 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-bpf-maps\") pod \"cilium-sdt4v\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " pod="kube-system/cilium-sdt4v"
Sep 13 10:20:46.783366 systemd[1]: Created slice kubepods-besteffort-podeac7faa3_dc29_4ce9_8fa7_7ac08c65a2bd.slice - libcontainer container kubepods-besteffort-podeac7faa3_dc29_4ce9_8fa7_7ac08c65a2bd.slice.
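Editor's aside (not part of the captured journal): each `UniqueName` in the volume-reconciler entries above packs the volume plugin, pod UID, and volume name into one string. A small sketch of splitting one apart, under the assumption (which holds for every entry above) that the last path segment is `<pod UID>-<volume name>`:

```python
# Fields copied verbatim from one VerifyControllerAttachedVolume entry above.
UNIQUE_NAME = "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-hostproc"
POD_UID = "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"  # UID logged in the same entry

# Everything before the last "/" is the volume plugin name.
plugin, spec = UNIQUE_NAME.rsplit("/", 1)
# Assumed layout of the spec segment: "<pod UID>-<volume name>".
assert spec.startswith(POD_UID + "-")
volume = spec[len(POD_UID) + 1:]

print(plugin, volume)
```

Splitting on the known pod UID (rather than guessing a UID length) also works for the static-pod entries earlier in the journal, whose UIDs are 32-character hashes without dashes.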
Sep 13 10:20:46.841485 kubelet[2739]: I0913 10:20:46.841381 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn69f\" (UniqueName: \"kubernetes.io/projected/eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd-kube-api-access-kn69f\") pod \"cilium-operator-6c4d7847fc-4nq4c\" (UID: \"eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd\") " pod="kube-system/cilium-operator-6c4d7847fc-4nq4c"
Sep 13 10:20:46.841485 kubelet[2739]: I0913 10:20:46.841417 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4nq4c\" (UID: \"eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd\") " pod="kube-system/cilium-operator-6c4d7847fc-4nq4c"
Sep 13 10:20:46.856222 containerd[1578]: time="2025-09-13T10:20:46.856183720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fdg8p,Uid:05ce0d42-204f-46e6-bb4a-c31f55655d8d,Namespace:kube-system,Attempt:0,}"
Sep 13 10:20:46.891531 containerd[1578]: time="2025-09-13T10:20:46.891456740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sdt4v,Uid:878a6ecc-c3dc-4e20-bc71-8036d0f91f72,Namespace:kube-system,Attempt:0,}"
Sep 13 10:20:46.909264 containerd[1578]: time="2025-09-13T10:20:46.909200779Z" level=info msg="connecting to shim c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120" address="unix:///run/containerd/s/fc3ecddf894f6dbcefcbcf62c873256aa9cc621de28951c180b993192ae779ec" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:20:46.949436 containerd[1578]: time="2025-09-13T10:20:46.949360580Z" level=info msg="connecting to shim ecec83359e4a95e0773325afb2dd8de8f4ad487961a2730b55cb990d9fd5c27b" address="unix:///run/containerd/s/ce84dd3159b9e502595f997dd873cb2f2dc1665273882b2bca8162564a1a798b" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:20:46.971769 systemd[1]: Started cri-containerd-c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120.scope - libcontainer container c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120.
Sep 13 10:20:46.977297 systemd[1]: Started cri-containerd-ecec83359e4a95e0773325afb2dd8de8f4ad487961a2730b55cb990d9fd5c27b.scope - libcontainer container ecec83359e4a95e0773325afb2dd8de8f4ad487961a2730b55cb990d9fd5c27b.
Sep 13 10:20:47.005342 containerd[1578]: time="2025-09-13T10:20:47.005291896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sdt4v,Uid:878a6ecc-c3dc-4e20-bc71-8036d0f91f72,Namespace:kube-system,Attempt:0,} returns sandbox id \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\""
Sep 13 10:20:47.007668 containerd[1578]: time="2025-09-13T10:20:47.007638933Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 10:20:47.011360 containerd[1578]: time="2025-09-13T10:20:47.011326196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fdg8p,Uid:05ce0d42-204f-46e6-bb4a-c31f55655d8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecec83359e4a95e0773325afb2dd8de8f4ad487961a2730b55cb990d9fd5c27b\""
Sep 13 10:20:47.017348 containerd[1578]: time="2025-09-13T10:20:47.017319175Z" level=info msg="CreateContainer within sandbox \"ecec83359e4a95e0773325afb2dd8de8f4ad487961a2730b55cb990d9fd5c27b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 10:20:47.027744 containerd[1578]: time="2025-09-13T10:20:47.027708029Z" level=info msg="Container 6559c092fe6558ba3b45d55a22324d46fa57c1c0d3fc3f63f128d9457eecba6b: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:20:47.036181 containerd[1578]: time="2025-09-13T10:20:47.036136515Z" level=info msg="CreateContainer within sandbox \"ecec83359e4a95e0773325afb2dd8de8f4ad487961a2730b55cb990d9fd5c27b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6559c092fe6558ba3b45d55a22324d46fa57c1c0d3fc3f63f128d9457eecba6b\""
Sep 13 10:20:47.038562 containerd[1578]: time="2025-09-13T10:20:47.036635737Z" level=info msg="StartContainer for \"6559c092fe6558ba3b45d55a22324d46fa57c1c0d3fc3f63f128d9457eecba6b\""
Sep 13 10:20:47.038562 containerd[1578]: time="2025-09-13T10:20:47.037903735Z" level=info msg="connecting to shim 6559c092fe6558ba3b45d55a22324d46fa57c1c0d3fc3f63f128d9457eecba6b" address="unix:///run/containerd/s/ce84dd3159b9e502595f997dd873cb2f2dc1665273882b2bca8162564a1a798b" protocol=ttrpc version=3
Sep 13 10:20:47.071635 systemd[1]: Started cri-containerd-6559c092fe6558ba3b45d55a22324d46fa57c1c0d3fc3f63f128d9457eecba6b.scope - libcontainer container 6559c092fe6558ba3b45d55a22324d46fa57c1c0d3fc3f63f128d9457eecba6b.
Sep 13 10:20:47.086290 containerd[1578]: time="2025-09-13T10:20:47.086249633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4nq4c,Uid:eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd,Namespace:kube-system,Attempt:0,}"
Sep 13 10:20:47.106724 containerd[1578]: time="2025-09-13T10:20:47.106605888Z" level=info msg="connecting to shim 296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154" address="unix:///run/containerd/s/f099977db52a6bd9731b1d69e26dbf759fe127b47268432bb47dea18e17c1fec" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:20:47.120272 containerd[1578]: time="2025-09-13T10:20:47.120195938Z" level=info msg="StartContainer for \"6559c092fe6558ba3b45d55a22324d46fa57c1c0d3fc3f63f128d9457eecba6b\" returns successfully"
Sep 13 10:20:47.135674 systemd[1]: Started cri-containerd-296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154.scope - libcontainer container 296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154.
Sep 13 10:20:47.190674 containerd[1578]: time="2025-09-13T10:20:47.190618120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4nq4c,Uid:eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154\""
Sep 13 10:20:48.116339 kubelet[2739]: I0913 10:20:48.116005 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fdg8p" podStartSLOduration=2.115980323 podStartE2EDuration="2.115980323s" podCreationTimestamp="2025-09-13 10:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:20:47.257637734 +0000 UTC m=+7.147002084" watchObservedRunningTime="2025-09-13 10:20:48.115980323 +0000 UTC m=+8.005344673"
Sep 13 10:20:51.639171 update_engine[1566]: I20250913 10:20:51.639043 1566 update_attempter.cc:509] Updating boot flags...
Sep 13 10:20:54.714305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359731119.mount: Deactivated successfully.
Sep 13 10:20:57.280186 containerd[1578]: time="2025-09-13T10:20:57.280119232Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:57.280722 containerd[1578]: time="2025-09-13T10:20:57.280682527Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 13 10:20:57.281862 containerd[1578]: time="2025-09-13T10:20:57.281828235Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:20:57.283351 containerd[1578]: time="2025-09-13T10:20:57.283319466Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.275644525s"
Sep 13 10:20:57.283351 containerd[1578]: time="2025-09-13T10:20:57.283349193Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 13 10:20:57.284432 containerd[1578]: time="2025-09-13T10:20:57.284374292Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 10:20:57.287867 containerd[1578]: time="2025-09-13T10:20:57.287825463Z" level=info msg="CreateContainer within sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 10:20:57.296610 containerd[1578]: time="2025-09-13T10:20:57.296559007Z" level=info msg="Container 2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:20:57.306217 containerd[1578]: time="2025-09-13T10:20:57.306180912Z" level=info msg="CreateContainer within sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\""
Sep 13 10:20:57.306770 containerd[1578]: time="2025-09-13T10:20:57.306727306Z" level=info msg="StartContainer for \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\""
Sep 13 10:20:57.309210 containerd[1578]: time="2025-09-13T10:20:57.309153046Z" level=info msg="connecting to shim 2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc" address="unix:///run/containerd/s/fc3ecddf894f6dbcefcbcf62c873256aa9cc621de28951c180b993192ae779ec" protocol=ttrpc version=3
Sep 13 10:20:57.335669 systemd[1]: Started cri-containerd-2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc.scope - libcontainer container 2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc.
Sep 13 10:20:57.375631 containerd[1578]: time="2025-09-13T10:20:57.375576184Z" level=info msg="StartContainer for \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\" returns successfully"
Sep 13 10:20:57.387946 systemd[1]: cri-containerd-2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc.scope: Deactivated successfully.
Sep 13 10:20:57.389677 containerd[1578]: time="2025-09-13T10:20:57.389638211Z" level=info msg="received exit event container_id:\"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\" id:\"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\" pid:3186 exited_at:{seconds:1757758857 nanos:389142994}"
Sep 13 10:20:57.389806 containerd[1578]: time="2025-09-13T10:20:57.389700358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\" id:\"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\" pid:3186 exited_at:{seconds:1757758857 nanos:389142994}"
Sep 13 10:20:57.413755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc-rootfs.mount: Deactivated successfully.
Sep 13 10:20:59.375892 containerd[1578]: time="2025-09-13T10:20:59.375829129Z" level=info msg="CreateContainer within sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 10:20:59.727380 containerd[1578]: time="2025-09-13T10:20:59.727333113Z" level=info msg="Container ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:21:00.008615 containerd[1578]: time="2025-09-13T10:21:00.008413546Z" level=info msg="CreateContainer within sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\""
Sep 13 10:21:00.009422 containerd[1578]: time="2025-09-13T10:21:00.009361526Z" level=info msg="StartContainer for \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\""
Sep 13 10:21:00.010421 containerd[1578]: time="2025-09-13T10:21:00.010389189Z" level=info msg="connecting to shim ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2" address="unix:///run/containerd/s/fc3ecddf894f6dbcefcbcf62c873256aa9cc621de28951c180b993192ae779ec" protocol=ttrpc version=3
Sep 13 10:21:00.040761 systemd[1]: Started cri-containerd-ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2.scope - libcontainer container ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2.
Sep 13 10:21:00.217722 containerd[1578]: time="2025-09-13T10:21:00.217663328Z" level=info msg="StartContainer for \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\" returns successfully"
Sep 13 10:21:00.274781 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 10:21:00.276005 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:21:00.276306 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:21:00.279385 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:21:00.282804 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 10:21:00.283780 systemd[1]: cri-containerd-ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2.scope: Deactivated successfully.
Sep 13 10:21:00.284067 containerd[1578]: time="2025-09-13T10:21:00.284027740Z" level=info msg="received exit event container_id:\"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\" id:\"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\" pid:3233 exited_at:{seconds:1757758860 nanos:283024806}"
Sep 13 10:21:00.284481 containerd[1578]: time="2025-09-13T10:21:00.284451200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\" id:\"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\" pid:3233 exited_at:{seconds:1757758860 nanos:283024806}"
Sep 13 10:21:00.324950 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:21:00.728324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2-rootfs.mount: Deactivated successfully.
Sep 13 10:21:00.766866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197918445.mount: Deactivated successfully.
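Editor's aside (not part of the captured journal): the `exited_at:{seconds:... nanos:...}` fields in the TaskExit events above are Unix epoch values, so they can be cross-checked against the journal's wall-clock timestamps. A minimal sketch, using the `exited_at` from the 2db860a2... exit event:

```python
from datetime import datetime, timezone

# exited_at copied from the 2db860a2... TaskExit event in the journal above.
EXIT_SECONDS, EXIT_NANOS = 1757758857, 389142994

# Interpret the epoch seconds as UTC, then fold in sub-second precision
# (Python's datetime only carries microseconds, so nanos are truncated).
exited = datetime.fromtimestamp(EXIT_SECONDS, tz=timezone.utc)
exited = exited.replace(microsecond=EXIT_NANOS // 1000)
print(exited.isoformat())  # → 2025-09-13T10:20:57.389142+00:00
```

That matches the `Sep 13 10:20:57` journal timestamps on the surrounding exit-event lines.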
Sep 13 10:21:01.137308 containerd[1578]: time="2025-09-13T10:21:01.137152316Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:21:01.137981 containerd[1578]: time="2025-09-13T10:21:01.137950212Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 13 10:21:01.139069 containerd[1578]: time="2025-09-13T10:21:01.139018600Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:21:01.140707 containerd[1578]: time="2025-09-13T10:21:01.140670770Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.856239048s"
Sep 13 10:21:01.140707 containerd[1578]: time="2025-09-13T10:21:01.140702409Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 10:21:01.146053 containerd[1578]: time="2025-09-13T10:21:01.145990063Z" level=info msg="CreateContainer within sandbox \"296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 10:21:01.153999 containerd[1578]: time="2025-09-13T10:21:01.153961517Z" level=info msg="Container 3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:21:01.162788 containerd[1578]: time="2025-09-13T10:21:01.162739882Z" level=info msg="CreateContainer within sandbox \"296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\""
Sep 13 10:21:01.163434 containerd[1578]: time="2025-09-13T10:21:01.163258001Z" level=info msg="StartContainer for \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\""
Sep 13 10:21:01.164250 containerd[1578]: time="2025-09-13T10:21:01.164221611Z" level=info msg="connecting to shim 3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2" address="unix:///run/containerd/s/f099977db52a6bd9731b1d69e26dbf759fe127b47268432bb47dea18e17c1fec" protocol=ttrpc version=3
Sep 13 10:21:01.190816 systemd[1]: Started cri-containerd-3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2.scope - libcontainer container 3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2.
Sep 13 10:21:01.227365 containerd[1578]: time="2025-09-13T10:21:01.227311732Z" level=info msg="StartContainer for \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" returns successfully"
Sep 13 10:21:01.291432 kubelet[2739]: I0913 10:21:01.291005 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4nq4c" podStartSLOduration=1.342831224 podStartE2EDuration="15.290977972s" podCreationTimestamp="2025-09-13 10:20:46 +0000 UTC" firstStartedPulling="2025-09-13 10:20:47.193335333 +0000 UTC m=+7.082699673" lastFinishedPulling="2025-09-13 10:21:01.141482071 +0000 UTC m=+21.030846421" observedRunningTime="2025-09-13 10:21:01.290371807 +0000 UTC m=+21.179736157" watchObservedRunningTime="2025-09-13 10:21:01.290977972 +0000 UTC m=+21.180342322"
Sep 13 10:21:01.294060 containerd[1578]: time="2025-09-13T10:21:01.293944543Z" level=info msg="CreateContainer within sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 10:21:01.318538 containerd[1578]: time="2025-09-13T10:21:01.318310362Z" level=info msg="Container d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:21:01.336400 containerd[1578]: time="2025-09-13T10:21:01.336327975Z" level=info msg="CreateContainer within sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\""
Sep 13 10:21:01.338753 containerd[1578]: time="2025-09-13T10:21:01.338700345Z" level=info msg="StartContainer for \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\""
Sep 13 10:21:01.345366 containerd[1578]: time="2025-09-13T10:21:01.345142199Z" level=info msg="connecting to shim d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4" address="unix:///run/containerd/s/fc3ecddf894f6dbcefcbcf62c873256aa9cc621de28951c180b993192ae779ec" protocol=ttrpc version=3
Sep 13 10:21:01.371858 systemd[1]: Started cri-containerd-d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4.scope - libcontainer container d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4.
Sep 13 10:21:01.447459 systemd[1]: cri-containerd-d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4.scope: Deactivated successfully.
Sep 13 10:21:01.449377 containerd[1578]: time="2025-09-13T10:21:01.449343129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\" id:\"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\" pid:3330 exited_at:{seconds:1757758861 nanos:449046509}"
Sep 13 10:21:01.724620 containerd[1578]: time="2025-09-13T10:21:01.723744004Z" level=info msg="received exit event container_id:\"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\" id:\"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\" pid:3330 exited_at:{seconds:1757758861 nanos:449046509}"
Sep 13 10:21:01.733927 containerd[1578]: time="2025-09-13T10:21:01.733874463Z" level=info msg="StartContainer for \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\" returns successfully"
Sep 13 10:21:01.764700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4-rootfs.mount: Deactivated successfully.
Sep 13 10:21:02.298564 containerd[1578]: time="2025-09-13T10:21:02.298475919Z" level=info msg="CreateContainer within sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 10:21:02.312810 containerd[1578]: time="2025-09-13T10:21:02.312732678Z" level=info msg="Container 64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:21:02.329659 containerd[1578]: time="2025-09-13T10:21:02.329598942Z" level=info msg="CreateContainer within sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\""
Sep 13 10:21:02.330236 containerd[1578]: time="2025-09-13T10:21:02.330164099Z" level=info msg="StartContainer for \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\""
Sep 13 10:21:02.331301 containerd[1578]: time="2025-09-13T10:21:02.331260359Z" level=info msg="connecting to shim 64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b" address="unix:///run/containerd/s/fc3ecddf894f6dbcefcbcf62c873256aa9cc621de28951c180b993192ae779ec" protocol=ttrpc version=3
Sep 13 10:21:02.355634 systemd[1]: Started cri-containerd-64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b.scope - libcontainer container 64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b.
Sep 13 10:21:02.387296 systemd[1]: cri-containerd-64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b.scope: Deactivated successfully.
Sep 13 10:21:02.388061 containerd[1578]: time="2025-09-13T10:21:02.388021113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\" id:\"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\" pid:3370 exited_at:{seconds:1757758862 nanos:387701230}"
Sep 13 10:21:02.389907 containerd[1578]: time="2025-09-13T10:21:02.389858341Z" level=info msg="received exit event container_id:\"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\" id:\"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\" pid:3370 exited_at:{seconds:1757758862 nanos:387701230}"
Sep 13 10:21:02.399089 containerd[1578]: time="2025-09-13T10:21:02.399050470Z" level=info msg="StartContainer for \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\" returns successfully"
Sep 13 10:21:02.728346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b-rootfs.mount: Deactivated successfully.
Sep 13 10:21:03.307157 containerd[1578]: time="2025-09-13T10:21:03.307107742Z" level=info msg="CreateContainer within sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 10:21:03.320799 containerd[1578]: time="2025-09-13T10:21:03.320738029Z" level=info msg="Container 4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:21:03.324361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977025475.mount: Deactivated successfully.
Sep 13 10:21:03.328924 containerd[1578]: time="2025-09-13T10:21:03.328886693Z" level=info msg="CreateContainer within sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\""
Sep 13 10:21:03.329527 containerd[1578]: time="2025-09-13T10:21:03.329438003Z" level=info msg="StartContainer for \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\""
Sep 13 10:21:03.330311 containerd[1578]: time="2025-09-13T10:21:03.330289310Z" level=info msg="connecting to shim 4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e" address="unix:///run/containerd/s/fc3ecddf894f6dbcefcbcf62c873256aa9cc621de28951c180b993192ae779ec" protocol=ttrpc version=3
Sep 13 10:21:03.356642 systemd[1]: Started cri-containerd-4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e.scope - libcontainer container 4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e.
Sep 13 10:21:03.394978 containerd[1578]: time="2025-09-13T10:21:03.394849946Z" level=info msg="StartContainer for \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" returns successfully"
Sep 13 10:21:03.516108 containerd[1578]: time="2025-09-13T10:21:03.516054315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" id:\"118a51be73068240dd0647e2b5243de4d3ece159f78127638290f5e561397564\" pid:3442 exited_at:{seconds:1757758863 nanos:515540917}"
Sep 13 10:21:03.597861 kubelet[2739]: I0913 10:21:03.597427 2739 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 13 10:21:03.672845 systemd[1]: Created slice kubepods-burstable-pod514faa4b_5f67_4967_8e21_61f98aa76dd3.slice - libcontainer container kubepods-burstable-pod514faa4b_5f67_4967_8e21_61f98aa76dd3.slice.
Sep 13 10:21:03.678600 systemd[1]: Created slice kubepods-burstable-podb6274362_bdc1_49cc_9566_af93d7617aa7.slice - libcontainer container kubepods-burstable-podb6274362_bdc1_49cc_9566_af93d7617aa7.slice.
Sep 13 10:21:03.754638 kubelet[2739]: I0913 10:21:03.754567 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqp7m\" (UniqueName: \"kubernetes.io/projected/514faa4b-5f67-4967-8e21-61f98aa76dd3-kube-api-access-dqp7m\") pod \"coredns-674b8bbfcf-zq2j9\" (UID: \"514faa4b-5f67-4967-8e21-61f98aa76dd3\") " pod="kube-system/coredns-674b8bbfcf-zq2j9"
Sep 13 10:21:03.754638 kubelet[2739]: I0913 10:21:03.754636 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/514faa4b-5f67-4967-8e21-61f98aa76dd3-config-volume\") pod \"coredns-674b8bbfcf-zq2j9\" (UID: \"514faa4b-5f67-4967-8e21-61f98aa76dd3\") " pod="kube-system/coredns-674b8bbfcf-zq2j9"
Sep 13 10:21:03.754863 kubelet[2739]: I0913 10:21:03.754667 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m245m\" (UniqueName: \"kubernetes.io/projected/b6274362-bdc1-49cc-9566-af93d7617aa7-kube-api-access-m245m\") pod \"coredns-674b8bbfcf-xdcl6\" (UID: \"b6274362-bdc1-49cc-9566-af93d7617aa7\") " pod="kube-system/coredns-674b8bbfcf-xdcl6"
Sep 13 10:21:03.754863 kubelet[2739]: I0913 10:21:03.754700 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6274362-bdc1-49cc-9566-af93d7617aa7-config-volume\") pod \"coredns-674b8bbfcf-xdcl6\" (UID: \"b6274362-bdc1-49cc-9566-af93d7617aa7\") " pod="kube-system/coredns-674b8bbfcf-xdcl6"
Sep 13 10:21:03.983036 containerd[1578]: time="2025-09-13T10:21:03.982990100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zq2j9,Uid:514faa4b-5f67-4967-8e21-61f98aa76dd3,Namespace:kube-system,Attempt:0,}"
Sep 13 10:21:03.983570 containerd[1578]: time="2025-09-13T10:21:03.983302208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xdcl6,Uid:b6274362-bdc1-49cc-9566-af93d7617aa7,Namespace:kube-system,Attempt:0,}"
Sep 13 10:21:04.326939 kubelet[2739]: I0913 10:21:04.326763 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sdt4v" podStartSLOduration=8.049651202 podStartE2EDuration="18.3267416s" podCreationTimestamp="2025-09-13 10:20:46 +0000 UTC" firstStartedPulling="2025-09-13 10:20:47.007140793 +0000 UTC m=+6.896505133" lastFinishedPulling="2025-09-13 10:20:57.284231181 +0000 UTC m=+17.173595531" observedRunningTime="2025-09-13 10:21:04.32574546 +0000 UTC m=+24.215109810" watchObservedRunningTime="2025-09-13 10:21:04.3267416 +0000 UTC m=+24.216105950"
Sep 13 10:21:05.718597 systemd-networkd[1490]: cilium_host: Link UP
Sep 13 10:21:05.718829 systemd-networkd[1490]: cilium_net: Link UP
Sep 13 10:21:05.719077 systemd-networkd[1490]: cilium_net: Gained carrier
Sep 13 10:21:05.719330 systemd-networkd[1490]: cilium_host: Gained carrier
Sep 13 10:21:05.824600 systemd-networkd[1490]: cilium_vxlan: Link UP
Sep 13 10:21:05.824618 systemd-networkd[1490]: cilium_vxlan: Gained carrier
Sep 13 10:21:06.034542 kernel: NET: Registered PF_ALG protocol family
Sep 13 10:21:06.069709 systemd-networkd[1490]: cilium_host: Gained IPv6LL
Sep 13 10:21:06.404761 systemd-networkd[1490]: cilium_net: Gained IPv6LL
Sep 13 10:21:06.688976 systemd-networkd[1490]: lxc_health: Link UP
Sep 13 10:21:06.695982 systemd-networkd[1490]: lxc_health: Gained carrier
Sep 13 10:21:07.024060 systemd-networkd[1490]: lxc9e6aa9354e63: Link UP
Sep 13 10:21:07.032548 kernel: eth0: renamed from tmpe6d52
Sep 13 10:21:07.035564 systemd-networkd[1490]: lxc9e6aa9354e63: Gained carrier
Sep 13 10:21:07.054350 kernel: eth0: renamed from tmp41c1a
Sep 13 10:21:07.053309 systemd-networkd[1490]: lxc83872c3098e6: Link UP
Sep 13 10:21:07.057966 systemd-networkd[1490]: lxc83872c3098e6: Gained carrier
Sep 13 10:21:07.172910 systemd-networkd[1490]: cilium_vxlan: Gained IPv6LL
Sep 13 10:21:08.327622 systemd-networkd[1490]: lxc83872c3098e6: Gained IPv6LL
Sep 13 10:21:08.328018 systemd-networkd[1490]: lxc_health: Gained IPv6LL
Sep 13 10:21:08.772775 systemd-networkd[1490]: lxc9e6aa9354e63: Gained IPv6LL
Sep 13 10:21:10.452602 containerd[1578]: time="2025-09-13T10:21:10.452519455Z" level=info msg="connecting to shim e6d522296dc21c681be505f52a67e6e5ff10795b1549687aaa07bb50f8acce6a" address="unix:///run/containerd/s/e22cf94b84ed177272103a102b537f66f9a45b5602c8ec4bf8fae3597bf77575" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:21:10.467635 containerd[1578]: time="2025-09-13T10:21:10.467569288Z" level=info msg="connecting to shim 41c1a7e83f6af4fdfb8f6c1ce3fa0ce42d8af7454c18c40a133863fdfc67f470" address="unix:///run/containerd/s/a7c9e023c71630de95c8c33d45062e8f7705fc602b683aef1a8bf16a52607508" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:21:10.485684 systemd[1]: Started cri-containerd-e6d522296dc21c681be505f52a67e6e5ff10795b1549687aaa07bb50f8acce6a.scope - libcontainer container e6d522296dc21c681be505f52a67e6e5ff10795b1549687aaa07bb50f8acce6a.
Sep 13 10:21:10.493511 systemd[1]: Started cri-containerd-41c1a7e83f6af4fdfb8f6c1ce3fa0ce42d8af7454c18c40a133863fdfc67f470.scope - libcontainer container 41c1a7e83f6af4fdfb8f6c1ce3fa0ce42d8af7454c18c40a133863fdfc67f470.
Sep 13 10:21:10.499990 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 10:21:10.509295 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 10:21:10.534072 containerd[1578]: time="2025-09-13T10:21:10.534029855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xdcl6,Uid:b6274362-bdc1-49cc-9566-af93d7617aa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6d522296dc21c681be505f52a67e6e5ff10795b1549687aaa07bb50f8acce6a\""
Sep 13 10:21:10.548249 containerd[1578]: time="2025-09-13T10:21:10.547342026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zq2j9,Uid:514faa4b-5f67-4967-8e21-61f98aa76dd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"41c1a7e83f6af4fdfb8f6c1ce3fa0ce42d8af7454c18c40a133863fdfc67f470\""
Sep 13 10:21:10.557076 containerd[1578]: time="2025-09-13T10:21:10.557014957Z" level=info msg="CreateContainer within sandbox \"41c1a7e83f6af4fdfb8f6c1ce3fa0ce42d8af7454c18c40a133863fdfc67f470\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 10:21:10.560202 containerd[1578]: time="2025-09-13T10:21:10.560161039Z" level=info msg="CreateContainer within sandbox \"e6d522296dc21c681be505f52a67e6e5ff10795b1549687aaa07bb50f8acce6a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 10:21:10.567532 containerd[1578]: time="2025-09-13T10:21:10.567459279Z" level=info msg="Container 3d0b62e48c4b1d0c8e09592d44040dc31052af4364d4f1b9888ce403b13383fa: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:21:10.581808 containerd[1578]: time="2025-09-13T10:21:10.581751636Z" level=info msg="CreateContainer within sandbox \"41c1a7e83f6af4fdfb8f6c1ce3fa0ce42d8af7454c18c40a133863fdfc67f470\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d0b62e48c4b1d0c8e09592d44040dc31052af4364d4f1b9888ce403b13383fa\""
Sep 13 10:21:10.582346 containerd[1578]: time="2025-09-13T10:21:10.582319074Z" level=info msg="StartContainer for \"3d0b62e48c4b1d0c8e09592d44040dc31052af4364d4f1b9888ce403b13383fa\""
Sep 13 10:21:10.583213 containerd[1578]: time="2025-09-13T10:21:10.583141643Z" level=info msg="connecting to shim 3d0b62e48c4b1d0c8e09592d44040dc31052af4364d4f1b9888ce403b13383fa" address="unix:///run/containerd/s/a7c9e023c71630de95c8c33d45062e8f7705fc602b683aef1a8bf16a52607508" protocol=ttrpc version=3
Sep 13 10:21:10.591385 containerd[1578]: time="2025-09-13T10:21:10.591336189Z" level=info msg="Container ff7c9ec27fbde4cee36f40585c7ce4a8381ac3160574803353841ed45f78cfe3: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:21:10.597940 containerd[1578]: time="2025-09-13T10:21:10.597883927Z" level=info msg="CreateContainer within sandbox \"e6d522296dc21c681be505f52a67e6e5ff10795b1549687aaa07bb50f8acce6a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff7c9ec27fbde4cee36f40585c7ce4a8381ac3160574803353841ed45f78cfe3\""
Sep 13 10:21:10.598793 containerd[1578]: time="2025-09-13T10:21:10.598770235Z" level=info msg="StartContainer for \"ff7c9ec27fbde4cee36f40585c7ce4a8381ac3160574803353841ed45f78cfe3\""
Sep 13 10:21:10.600200 containerd[1578]: time="2025-09-13T10:21:10.600157616Z" level=info msg="connecting to shim ff7c9ec27fbde4cee36f40585c7ce4a8381ac3160574803353841ed45f78cfe3" address="unix:///run/containerd/s/e22cf94b84ed177272103a102b537f66f9a45b5602c8ec4bf8fae3597bf77575" protocol=ttrpc version=3
Sep 13 10:21:10.616654 systemd[1]: Started cri-containerd-3d0b62e48c4b1d0c8e09592d44040dc31052af4364d4f1b9888ce403b13383fa.scope - libcontainer container 3d0b62e48c4b1d0c8e09592d44040dc31052af4364d4f1b9888ce403b13383fa.
Sep 13 10:21:10.619086 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:40300.service - OpenSSH per-connection server daemon (10.0.0.1:40300).
Sep 13 10:21:10.629679 systemd[1]: Started cri-containerd-ff7c9ec27fbde4cee36f40585c7ce4a8381ac3160574803353841ed45f78cfe3.scope - libcontainer container ff7c9ec27fbde4cee36f40585c7ce4a8381ac3160574803353841ed45f78cfe3.
Sep 13 10:21:10.674943 containerd[1578]: time="2025-09-13T10:21:10.674882703Z" level=info msg="StartContainer for \"ff7c9ec27fbde4cee36f40585c7ce4a8381ac3160574803353841ed45f78cfe3\" returns successfully"
Sep 13 10:21:10.675413 containerd[1578]: time="2025-09-13T10:21:10.675375160Z" level=info msg="StartContainer for \"3d0b62e48c4b1d0c8e09592d44040dc31052af4364d4f1b9888ce403b13383fa\" returns successfully"
Sep 13 10:21:10.687706 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 40300 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:10.689362 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:10.695036 systemd-logind[1558]: New session 8 of user core.
Sep 13 10:21:10.715784 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 13 10:21:10.932990 sshd[4079]: Connection closed by 10.0.0.1 port 40300
Sep 13 10:21:10.933404 sshd-session[4040]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:10.937836 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:40300.service: Deactivated successfully.
Sep 13 10:21:10.940032 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 10:21:10.940889 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit.
Sep 13 10:21:10.942343 systemd-logind[1558]: Removed session 8.
Sep 13 10:21:11.346004 kubelet[2739]: I0913 10:21:11.345926 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zq2j9" podStartSLOduration=25.3459083 podStartE2EDuration="25.3459083s" podCreationTimestamp="2025-09-13 10:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:21:11.345225806 +0000 UTC m=+31.234590157" watchObservedRunningTime="2025-09-13 10:21:11.3459083 +0000 UTC m=+31.235272650"
Sep 13 10:21:11.370363 kubelet[2739]: I0913 10:21:11.370275 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xdcl6" podStartSLOduration=25.370255325 podStartE2EDuration="25.370255325s" podCreationTimestamp="2025-09-13 10:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:21:11.370008911 +0000 UTC m=+31.259373271" watchObservedRunningTime="2025-09-13 10:21:11.370255325 +0000 UTC m=+31.259619696"
Sep 13 10:21:11.445367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673356287.mount: Deactivated successfully.
Sep 13 10:21:15.950273 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:40308.service - OpenSSH per-connection server daemon (10.0.0.1:40308).
Sep 13 10:21:16.019125 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 40308 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:16.021162 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:16.025824 systemd-logind[1558]: New session 9 of user core.
Sep 13 10:21:16.041656 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 10:21:16.168016 sshd[4113]: Connection closed by 10.0.0.1 port 40308
Sep 13 10:21:16.168376 sshd-session[4110]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:16.173203 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:40308.service: Deactivated successfully.
Sep 13 10:21:16.175449 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 10:21:16.176474 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit.
Sep 13 10:21:16.178171 systemd-logind[1558]: Removed session 9.
Sep 13 10:21:21.182550 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:33152.service - OpenSSH per-connection server daemon (10.0.0.1:33152).
Sep 13 10:21:21.232942 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 33152 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:21.234760 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:21.239652 systemd-logind[1558]: New session 10 of user core.
Sep 13 10:21:21.248740 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 10:21:21.429462 sshd[4136]: Connection closed by 10.0.0.1 port 33152
Sep 13 10:21:21.429941 sshd-session[4133]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:21.434648 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:33152.service: Deactivated successfully.
Sep 13 10:21:21.437388 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 10:21:21.438360 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit.
Sep 13 10:21:21.440234 systemd-logind[1558]: Removed session 10.
Sep 13 10:21:26.446454 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:33156.service - OpenSSH per-connection server daemon (10.0.0.1:33156).
Sep 13 10:21:26.506892 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 33156 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:26.508363 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:26.512954 systemd-logind[1558]: New session 11 of user core.
Sep 13 10:21:26.522638 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 10:21:26.646632 sshd[4153]: Connection closed by 10.0.0.1 port 33156
Sep 13 10:21:26.647022 sshd-session[4150]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:26.659250 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:33156.service: Deactivated successfully.
Sep 13 10:21:26.661156 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 10:21:26.661893 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit.
Sep 13 10:21:26.664548 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:33166.service - OpenSSH per-connection server daemon (10.0.0.1:33166).
Sep 13 10:21:26.665228 systemd-logind[1558]: Removed session 11.
Sep 13 10:21:26.717351 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 33166 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:26.719073 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:26.723819 systemd-logind[1558]: New session 12 of user core.
Sep 13 10:21:26.734631 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 10:21:26.951593 sshd[4171]: Connection closed by 10.0.0.1 port 33166
Sep 13 10:21:26.951922 sshd-session[4168]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:26.964161 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:33166.service: Deactivated successfully.
Sep 13 10:21:26.967931 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 10:21:26.970702 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit.
Sep 13 10:21:26.975741 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:33172.service - OpenSSH per-connection server daemon (10.0.0.1:33172).
Sep 13 10:21:26.976668 systemd-logind[1558]: Removed session 12.
Sep 13 10:21:27.029842 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 33172 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:27.031981 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:27.036682 systemd-logind[1558]: New session 13 of user core.
Sep 13 10:21:27.045620 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 10:21:27.376273 sshd[4185]: Connection closed by 10.0.0.1 port 33172
Sep 13 10:21:27.376596 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:27.380629 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:33172.service: Deactivated successfully.
Sep 13 10:21:27.382787 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 10:21:27.384370 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit.
Sep 13 10:21:27.385639 systemd-logind[1558]: Removed session 13.
Sep 13 10:21:32.392248 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:58308.service - OpenSSH per-connection server daemon (10.0.0.1:58308).
Sep 13 10:21:32.452983 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 58308 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:32.454895 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:32.459877 systemd-logind[1558]: New session 14 of user core.
Sep 13 10:21:32.471632 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 10:21:32.579483 sshd[4202]: Connection closed by 10.0.0.1 port 58308
Sep 13 10:21:32.579965 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:32.585643 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:58308.service: Deactivated successfully.
Sep 13 10:21:32.588222 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 10:21:32.589127 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit.
Sep 13 10:21:32.590608 systemd-logind[1558]: Removed session 14.
Sep 13 10:21:37.599183 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:58324.service - OpenSSH per-connection server daemon (10.0.0.1:58324).
Sep 13 10:21:37.658578 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 58324 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:37.659823 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:37.663932 systemd-logind[1558]: New session 15 of user core.
Sep 13 10:21:37.675606 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 10:21:37.775944 sshd[4218]: Connection closed by 10.0.0.1 port 58324
Sep 13 10:21:37.776366 sshd-session[4215]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:37.792066 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:58324.service: Deactivated successfully.
Sep 13 10:21:37.793972 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 10:21:37.794722 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit.
Sep 13 10:21:37.797738 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:58338.service - OpenSSH per-connection server daemon (10.0.0.1:58338).
Sep 13 10:21:37.798399 systemd-logind[1558]: Removed session 15.
Sep 13 10:21:37.848193 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 58338 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:37.849585 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:37.854106 systemd-logind[1558]: New session 16 of user core.
Sep 13 10:21:37.864629 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 10:21:38.111014 sshd[4234]: Connection closed by 10.0.0.1 port 58338
Sep 13 10:21:38.111555 sshd-session[4231]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:38.125398 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:58338.service: Deactivated successfully.
Sep 13 10:21:38.127230 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 10:21:38.128051 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit.
Sep 13 10:21:38.130570 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:58354.service - OpenSSH per-connection server daemon (10.0.0.1:58354).
Sep 13 10:21:38.131208 systemd-logind[1558]: Removed session 16.
Sep 13 10:21:38.182703 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 58354 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:38.184091 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:38.188271 systemd-logind[1558]: New session 17 of user core.
Sep 13 10:21:38.197616 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 10:21:38.761323 sshd[4249]: Connection closed by 10.0.0.1 port 58354
Sep 13 10:21:38.760749 sshd-session[4246]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:38.773104 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:58354.service: Deactivated successfully.
Sep 13 10:21:38.778239 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 10:21:38.780349 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit.
Sep 13 10:21:38.785029 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:58360.service - OpenSSH per-connection server daemon (10.0.0.1:58360).
Sep 13 10:21:38.785988 systemd-logind[1558]: Removed session 17.
Sep 13 10:21:38.834002 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 58360 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:38.835332 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:38.840072 systemd-logind[1558]: New session 18 of user core.
Sep 13 10:21:38.849649 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 10:21:39.079609 sshd[4271]: Connection closed by 10.0.0.1 port 58360
Sep 13 10:21:39.080142 sshd-session[4268]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:39.089300 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:58360.service: Deactivated successfully.
Sep 13 10:21:39.091423 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 10:21:39.093398 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit.
Sep 13 10:21:39.098042 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:58368.service - OpenSSH per-connection server daemon (10.0.0.1:58368).
Sep 13 10:21:39.098720 systemd-logind[1558]: Removed session 18.
Sep 13 10:21:39.157635 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 58368 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:39.158905 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:39.163191 systemd-logind[1558]: New session 19 of user core.
Sep 13 10:21:39.175620 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 10:21:39.286975 sshd[4285]: Connection closed by 10.0.0.1 port 58368
Sep 13 10:21:39.287349 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:39.292231 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:58368.service: Deactivated successfully.
Sep 13 10:21:39.294382 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 10:21:39.295490 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit.
Sep 13 10:21:39.296864 systemd-logind[1558]: Removed session 19.
Sep 13 10:21:44.311821 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:55254.service - OpenSSH per-connection server daemon (10.0.0.1:55254).
Sep 13 10:21:44.368551 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 55254 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:44.370025 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:44.374595 systemd-logind[1558]: New session 20 of user core.
Sep 13 10:21:44.384634 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 10:21:44.495471 sshd[4303]: Connection closed by 10.0.0.1 port 55254
Sep 13 10:21:44.495864 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:44.500548 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:55254.service: Deactivated successfully.
Sep 13 10:21:44.502775 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 10:21:44.503660 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit.
Sep 13 10:21:44.505171 systemd-logind[1558]: Removed session 20.
Sep 13 10:21:49.520341 systemd[1]: Started sshd@20-10.0.0.73:22-10.0.0.1:55258.service - OpenSSH per-connection server daemon (10.0.0.1:55258).
Sep 13 10:21:49.570088 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 55258 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:49.571311 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:49.575438 systemd-logind[1558]: New session 21 of user core.
Sep 13 10:21:49.590611 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 10:21:49.699311 sshd[4323]: Connection closed by 10.0.0.1 port 55258
Sep 13 10:21:49.699666 sshd-session[4320]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:49.704342 systemd[1]: sshd@20-10.0.0.73:22-10.0.0.1:55258.service: Deactivated successfully.
Sep 13 10:21:49.707205 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 10:21:49.707965 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit.
Sep 13 10:21:49.709113 systemd-logind[1558]: Removed session 21.
Sep 13 10:21:54.716348 systemd[1]: Started sshd@21-10.0.0.73:22-10.0.0.1:53030.service - OpenSSH per-connection server daemon (10.0.0.1:53030).
Sep 13 10:21:54.774316 sshd[4336]: Accepted publickey for core from 10.0.0.1 port 53030 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:54.775814 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:54.779969 systemd-logind[1558]: New session 22 of user core.
Sep 13 10:21:54.789616 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 10:21:54.901486 sshd[4339]: Connection closed by 10.0.0.1 port 53030
Sep 13 10:21:54.901875 sshd-session[4336]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:54.915251 systemd[1]: sshd@21-10.0.0.73:22-10.0.0.1:53030.service: Deactivated successfully.
Sep 13 10:21:54.917257 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 10:21:54.918257 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit.
Sep 13 10:21:54.921171 systemd[1]: Started sshd@22-10.0.0.73:22-10.0.0.1:53034.service - OpenSSH per-connection server daemon (10.0.0.1:53034). Sep 13 10:21:54.922037 systemd-logind[1558]: Removed session 22. Sep 13 10:21:54.977400 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 53034 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw Sep 13 10:21:54.978643 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:21:54.983285 systemd-logind[1558]: New session 23 of user core. Sep 13 10:21:54.999627 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 10:21:56.395778 containerd[1578]: time="2025-09-13T10:21:56.395427861Z" level=info msg="StopContainer for \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" with timeout 30 (s)" Sep 13 10:21:56.404137 containerd[1578]: time="2025-09-13T10:21:56.404074987Z" level=info msg="Stop container \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" with signal terminated" Sep 13 10:21:56.418240 systemd[1]: cri-containerd-3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2.scope: Deactivated successfully. 
Sep 13 10:21:56.421950 containerd[1578]: time="2025-09-13T10:21:56.421894948Z" level=info msg="received exit event container_id:\"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" id:\"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" pid:3296 exited_at:{seconds:1757758916 nanos:421171398}" Sep 13 10:21:56.422095 containerd[1578]: time="2025-09-13T10:21:56.421999689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" id:\"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" pid:3296 exited_at:{seconds:1757758916 nanos:421171398}" Sep 13 10:21:56.442530 containerd[1578]: time="2025-09-13T10:21:56.442376765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" id:\"48a4f815d3976f608b7a21aada5bf51f2f5f74f5e07a4564849ef30453338d40\" pid:4382 exited_at:{seconds:1757758916 nanos:441478208}" Sep 13 10:21:56.442766 containerd[1578]: time="2025-09-13T10:21:56.442721456Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 10:21:56.448533 containerd[1578]: time="2025-09-13T10:21:56.448448762Z" level=info msg="StopContainer for \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" with timeout 2 (s)" Sep 13 10:21:56.450643 containerd[1578]: time="2025-09-13T10:21:56.450607810Z" level=info msg="Stop container \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" with signal terminated" Sep 13 10:21:56.453572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2-rootfs.mount: Deactivated successfully. 
Sep 13 10:21:56.463304 systemd-networkd[1490]: lxc_health: Link DOWN Sep 13 10:21:56.463315 systemd-networkd[1490]: lxc_health: Lost carrier Sep 13 10:21:56.468396 containerd[1578]: time="2025-09-13T10:21:56.468319784Z" level=info msg="StopContainer for \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" returns successfully" Sep 13 10:21:56.472565 containerd[1578]: time="2025-09-13T10:21:56.472360869Z" level=info msg="StopPodSandbox for \"296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154\"" Sep 13 10:21:56.472565 containerd[1578]: time="2025-09-13T10:21:56.472469508Z" level=info msg="Container to stop \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:21:56.483485 systemd[1]: cri-containerd-296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154.scope: Deactivated successfully. Sep 13 10:21:56.485669 containerd[1578]: time="2025-09-13T10:21:56.485370338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154\" id:\"296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154\" pid:2980 exit_status:137 exited_at:{seconds:1757758916 nanos:484969137}" Sep 13 10:21:56.485884 systemd[1]: cri-containerd-4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e.scope: Deactivated successfully. Sep 13 10:21:56.486332 systemd[1]: cri-containerd-4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e.scope: Consumed 6.541s CPU time, 124.9M memory peak, 232K read from disk, 13.3M written to disk. 
Sep 13 10:21:56.488375 containerd[1578]: time="2025-09-13T10:21:56.488264599Z" level=info msg="received exit event container_id:\"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" id:\"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" pid:3408 exited_at:{seconds:1757758916 nanos:487805377}" Sep 13 10:21:56.516011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e-rootfs.mount: Deactivated successfully. Sep 13 10:21:56.528966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154-rootfs.mount: Deactivated successfully. Sep 13 10:21:56.530314 containerd[1578]: time="2025-09-13T10:21:56.530273600Z" level=info msg="shim disconnected" id=296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154 namespace=k8s.io Sep 13 10:21:56.530314 containerd[1578]: time="2025-09-13T10:21:56.530309009Z" level=warning msg="cleaning up after shim disconnected" id=296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154 namespace=k8s.io Sep 13 10:21:56.543039 containerd[1578]: time="2025-09-13T10:21:56.530318567Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 10:21:56.543220 containerd[1578]: time="2025-09-13T10:21:56.530939270Z" level=info msg="StopContainer for \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" returns successfully" Sep 13 10:21:56.543941 containerd[1578]: time="2025-09-13T10:21:56.543906497Z" level=info msg="StopPodSandbox for \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\"" Sep 13 10:21:56.544017 containerd[1578]: time="2025-09-13T10:21:56.543976481Z" level=info msg="Container to stop \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:21:56.544017 containerd[1578]: time="2025-09-13T10:21:56.543993173Z" level=info 
msg="Container to stop \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:21:56.544017 containerd[1578]: time="2025-09-13T10:21:56.544003793Z" level=info msg="Container to stop \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:21:56.544017 containerd[1578]: time="2025-09-13T10:21:56.544014164Z" level=info msg="Container to stop \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:21:56.544162 containerd[1578]: time="2025-09-13T10:21:56.544024573Z" level=info msg="Container to stop \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 10:21:56.551712 systemd[1]: cri-containerd-c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120.scope: Deactivated successfully. Sep 13 10:21:56.570240 containerd[1578]: time="2025-09-13T10:21:56.570166515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" id:\"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" pid:3408 exited_at:{seconds:1757758916 nanos:487805377}" Sep 13 10:21:56.570240 containerd[1578]: time="2025-09-13T10:21:56.570219598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" id:\"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" pid:2891 exit_status:137 exited_at:{seconds:1757758916 nanos:552294874}" Sep 13 10:21:56.572164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154-shm.mount: Deactivated successfully. 
Sep 13 10:21:56.582571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120-rootfs.mount: Deactivated successfully. Sep 13 10:21:56.587029 containerd[1578]: time="2025-09-13T10:21:56.586975364Z" level=info msg="received exit event sandbox_id:\"296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154\" exit_status:137 exited_at:{seconds:1757758916 nanos:484969137}" Sep 13 10:21:56.590432 containerd[1578]: time="2025-09-13T10:21:56.590370709Z" level=info msg="TearDown network for sandbox \"296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154\" successfully" Sep 13 10:21:56.590432 containerd[1578]: time="2025-09-13T10:21:56.590423339Z" level=info msg="StopPodSandbox for \"296dc02beeff01c541babc3b92db8b2274eff8c6d92707d59edf05989b7e6154\" returns successfully" Sep 13 10:21:56.592606 containerd[1578]: time="2025-09-13T10:21:56.592566908Z" level=info msg="received exit event sandbox_id:\"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" exit_status:137 exited_at:{seconds:1757758916 nanos:552294874}" Sep 13 10:21:56.592848 containerd[1578]: time="2025-09-13T10:21:56.592819935Z" level=info msg="shim disconnected" id=c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120 namespace=k8s.io Sep 13 10:21:56.592848 containerd[1578]: time="2025-09-13T10:21:56.592846255Z" level=warning msg="cleaning up after shim disconnected" id=c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120 namespace=k8s.io Sep 13 10:21:56.593035 containerd[1578]: time="2025-09-13T10:21:56.592854121Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 10:21:56.593826 containerd[1578]: time="2025-09-13T10:21:56.593636183Z" level=info msg="TearDown network for sandbox \"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" successfully" Sep 13 10:21:56.593826 containerd[1578]: time="2025-09-13T10:21:56.593668104Z" level=info msg="StopPodSandbox for 
\"c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120\" returns successfully" Sep 13 10:21:56.685623 kubelet[2739]: I0913 10:21:56.685578 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-bpf-maps\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.685623 kubelet[2739]: I0913 10:21:56.685619 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-lib-modules\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.685623 kubelet[2739]: I0913 10:21:56.685635 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-host-proc-sys-kernel\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.686371 kubelet[2739]: I0913 10:21:56.685658 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn69f\" (UniqueName: \"kubernetes.io/projected/eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd-kube-api-access-kn69f\") pod \"eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd\" (UID: \"eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd\") " Sep 13 10:21:56.686371 kubelet[2739]: I0913 10:21:56.685671 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-cgroup\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.686371 kubelet[2739]: I0913 10:21:56.685687 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-config-path\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.686371 kubelet[2739]: I0913 10:21:56.685700 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cni-path\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.686371 kubelet[2739]: I0913 10:21:56.685711 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-hostproc\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.686371 kubelet[2739]: I0913 10:21:56.685724 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-host-proc-sys-net\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.686780 kubelet[2739]: I0913 10:21:56.685739 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46wjq\" (UniqueName: \"kubernetes.io/projected/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-kube-api-access-46wjq\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.686780 kubelet[2739]: I0913 10:21:56.685736 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:21:56.686780 kubelet[2739]: I0913 10:21:56.685753 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd-cilium-config-path\") pod \"eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd\" (UID: \"eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd\") " Sep 13 10:21:56.686780 kubelet[2739]: I0913 10:21:56.685742 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:21:56.686780 kubelet[2739]: I0913 10:21:56.685782 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:21:56.686905 kubelet[2739]: I0913 10:21:56.685785 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:21:56.686905 kubelet[2739]: I0913 10:21:56.685798 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cni-path" (OuterVolumeSpecName: "cni-path") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:21:56.686905 kubelet[2739]: I0913 10:21:56.685767 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-etc-cni-netd\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.686905 kubelet[2739]: I0913 10:21:56.685804 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-hostproc" (OuterVolumeSpecName: "hostproc") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:21:56.686905 kubelet[2739]: I0913 10:21:56.685813 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:21:56.687020 kubelet[2739]: I0913 10:21:56.685858 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:21:56.687020 kubelet[2739]: I0913 10:21:56.686598 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-run\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.687020 kubelet[2739]: I0913 10:21:56.686628 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-hubble-tls\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.687020 kubelet[2739]: I0913 10:21:56.686644 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-xtables-lock\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.687020 kubelet[2739]: I0913 10:21:56.686661 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-clustermesh-secrets\") pod \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\" (UID: \"878a6ecc-c3dc-4e20-bc71-8036d0f91f72\") " Sep 13 10:21:56.687020 kubelet[2739]: I0913 10:21:56.686691 2739 reconciler_common.go:299] "Volume detached for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.687159 kubelet[2739]: I0913 10:21:56.686717 2739 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.687159 kubelet[2739]: I0913 10:21:56.686725 2739 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.687159 kubelet[2739]: I0913 10:21:56.686733 2739 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.687159 kubelet[2739]: I0913 10:21:56.686745 2739 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.687159 kubelet[2739]: I0913 10:21:56.686753 2739 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.687159 kubelet[2739]: I0913 10:21:56.686761 2739 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.687159 kubelet[2739]: I0913 10:21:56.686769 2739 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-host-proc-sys-kernel\") on node 
\"localhost\" DevicePath \"\"" Sep 13 10:21:56.689195 kubelet[2739]: I0913 10:21:56.689170 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd" (UID: "eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 10:21:56.689311 kubelet[2739]: I0913 10:21:56.689289 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 10:21:56.689376 kubelet[2739]: I0913 10:21:56.689358 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:21:56.689466 kubelet[2739]: I0913 10:21:56.689386 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:21:56.691078 kubelet[2739]: I0913 10:21:56.691025 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd-kube-api-access-kn69f" (OuterVolumeSpecName: "kube-api-access-kn69f") pod "eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd" (UID: "eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd"). InnerVolumeSpecName "kube-api-access-kn69f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 10:21:56.691235 kubelet[2739]: I0913 10:21:56.691122 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 10:21:56.691877 kubelet[2739]: I0913 10:21:56.691850 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-kube-api-access-46wjq" (OuterVolumeSpecName: "kube-api-access-46wjq") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "kube-api-access-46wjq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 10:21:56.692793 kubelet[2739]: I0913 10:21:56.692770 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "878a6ecc-c3dc-4e20-bc71-8036d0f91f72" (UID: "878a6ecc-c3dc-4e20-bc71-8036d0f91f72"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 10:21:56.787257 kubelet[2739]: I0913 10:21:56.787184 2739 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-46wjq\" (UniqueName: \"kubernetes.io/projected/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-kube-api-access-46wjq\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.787257 kubelet[2739]: I0913 10:21:56.787237 2739 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.787257 kubelet[2739]: I0913 10:21:56.787254 2739 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.787257 kubelet[2739]: I0913 10:21:56.787266 2739 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.787542 kubelet[2739]: I0913 10:21:56.787283 2739 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.787542 kubelet[2739]: I0913 10:21:56.787295 2739 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.787542 kubelet[2739]: I0913 10:21:56.787308 2739 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kn69f\" (UniqueName: \"kubernetes.io/projected/eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd-kube-api-access-kn69f\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:56.787542 
kubelet[2739]: I0913 10:21:56.787320 2739 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/878a6ecc-c3dc-4e20-bc71-8036d0f91f72-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 10:21:57.418948 kubelet[2739]: I0913 10:21:57.418915 2739 scope.go:117] "RemoveContainer" containerID="3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2" Sep 13 10:21:57.422103 containerd[1578]: time="2025-09-13T10:21:57.422070387Z" level=info msg="RemoveContainer for \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\"" Sep 13 10:21:57.425030 systemd[1]: Removed slice kubepods-besteffort-podeac7faa3_dc29_4ce9_8fa7_7ac08c65a2bd.slice - libcontainer container kubepods-besteffort-podeac7faa3_dc29_4ce9_8fa7_7ac08c65a2bd.slice. Sep 13 10:21:57.434201 systemd[1]: Removed slice kubepods-burstable-pod878a6ecc_c3dc_4e20_bc71_8036d0f91f72.slice - libcontainer container kubepods-burstable-pod878a6ecc_c3dc_4e20_bc71_8036d0f91f72.slice. Sep 13 10:21:57.434331 systemd[1]: kubepods-burstable-pod878a6ecc_c3dc_4e20_bc71_8036d0f91f72.slice: Consumed 6.667s CPU time, 125.2M memory peak, 236K read from disk, 13.3M written to disk. Sep 13 10:21:57.451403 systemd[1]: var-lib-kubelet-pods-eac7faa3\x2ddc29\x2d4ce9\x2d8fa7\x2d7ac08c65a2bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkn69f.mount: Deactivated successfully. Sep 13 10:21:57.451528 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c713507c1886dda57a6a01f5e600bb8e901a73555de8df34216f012f619a8120-shm.mount: Deactivated successfully. Sep 13 10:21:57.451607 systemd[1]: var-lib-kubelet-pods-878a6ecc\x2dc3dc\x2d4e20\x2dbc71\x2d8036d0f91f72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d46wjq.mount: Deactivated successfully. 
Sep 13 10:21:57.451688 systemd[1]: var-lib-kubelet-pods-878a6ecc\x2dc3dc\x2d4e20\x2dbc71\x2d8036d0f91f72-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 10:21:57.451767 systemd[1]: var-lib-kubelet-pods-878a6ecc\x2dc3dc\x2d4e20\x2dbc71\x2d8036d0f91f72-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 10:21:57.511092 containerd[1578]: time="2025-09-13T10:21:57.511025338Z" level=info msg="RemoveContainer for \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" returns successfully"
Sep 13 10:21:57.511520 kubelet[2739]: I0913 10:21:57.511456 2739 scope.go:117] "RemoveContainer" containerID="3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2"
Sep 13 10:21:57.511807 containerd[1578]: time="2025-09-13T10:21:57.511771029Z" level=error msg="ContainerStatus for \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\": not found"
Sep 13 10:21:57.513416 kubelet[2739]: E0913 10:21:57.513358 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\": not found" containerID="3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2"
Sep 13 10:21:57.513482 kubelet[2739]: I0913 10:21:57.513405 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2"} err="failed to get container status \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d55880cb85ddf0c476312655569124f79384e92077f25f74f550d5356b3e3e2\": not found"
Sep 13 10:21:57.513482 kubelet[2739]: I0913 10:21:57.513455 2739 scope.go:117] "RemoveContainer" containerID="4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e"
Sep 13 10:21:57.515202 containerd[1578]: time="2025-09-13T10:21:57.515168514Z" level=info msg="RemoveContainer for \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\""
Sep 13 10:21:57.568808 containerd[1578]: time="2025-09-13T10:21:57.568736050Z" level=info msg="RemoveContainer for \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" returns successfully"
Sep 13 10:21:57.569005 kubelet[2739]: I0913 10:21:57.568983 2739 scope.go:117] "RemoveContainer" containerID="64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b"
Sep 13 10:21:57.574896 containerd[1578]: time="2025-09-13T10:21:57.574855110Z" level=info msg="RemoveContainer for \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\""
Sep 13 10:21:57.586939 containerd[1578]: time="2025-09-13T10:21:57.586883779Z" level=info msg="RemoveContainer for \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\" returns successfully"
Sep 13 10:21:57.587299 kubelet[2739]: I0913 10:21:57.587239 2739 scope.go:117] "RemoveContainer" containerID="d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4"
Sep 13 10:21:57.589441 containerd[1578]: time="2025-09-13T10:21:57.589385794Z" level=info msg="RemoveContainer for \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\""
Sep 13 10:21:57.594923 containerd[1578]: time="2025-09-13T10:21:57.594879193Z" level=info msg="RemoveContainer for \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\" returns successfully"
Sep 13 10:21:57.595169 kubelet[2739]: I0913 10:21:57.595068 2739 scope.go:117] "RemoveContainer" containerID="ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2"
Sep 13 10:21:57.596394 containerd[1578]: time="2025-09-13T10:21:57.596368683Z" level=info msg="RemoveContainer for \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\""
Sep 13 10:21:57.600525 containerd[1578]: time="2025-09-13T10:21:57.600481771Z" level=info msg="RemoveContainer for \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\" returns successfully"
Sep 13 10:21:57.600693 kubelet[2739]: I0913 10:21:57.600672 2739 scope.go:117] "RemoveContainer" containerID="2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc"
Sep 13 10:21:57.601858 containerd[1578]: time="2025-09-13T10:21:57.601818319Z" level=info msg="RemoveContainer for \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\""
Sep 13 10:21:57.605825 containerd[1578]: time="2025-09-13T10:21:57.605792038Z" level=info msg="RemoveContainer for \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\" returns successfully"
Sep 13 10:21:57.605982 kubelet[2739]: I0913 10:21:57.605949 2739 scope.go:117] "RemoveContainer" containerID="4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e"
Sep 13 10:21:57.606173 containerd[1578]: time="2025-09-13T10:21:57.606138644Z" level=error msg="ContainerStatus for \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\": not found"
Sep 13 10:21:57.606285 kubelet[2739]: E0913 10:21:57.606257 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\": not found" containerID="4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e"
Sep 13 10:21:57.606325 kubelet[2739]: I0913 10:21:57.606300 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e"} err="failed to get container status \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4cbd56654d7ae94228c24bf3a045f9da31744fd32e348eb6273032474761858e\": not found"
Sep 13 10:21:57.606361 kubelet[2739]: I0913 10:21:57.606326 2739 scope.go:117] "RemoveContainer" containerID="64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b"
Sep 13 10:21:57.606779 containerd[1578]: time="2025-09-13T10:21:57.606715262Z" level=error msg="ContainerStatus for \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\": not found"
Sep 13 10:21:57.606904 kubelet[2739]: E0913 10:21:57.606884 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\": not found" containerID="64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b"
Sep 13 10:21:57.606934 kubelet[2739]: I0913 10:21:57.606907 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b"} err="failed to get container status \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\": rpc error: code = NotFound desc = an error occurred when try to find container \"64a9b0a75a88d2d0777428bc5ba0831f71dcc124c31a414a6a7f1f145e1a710b\": not found"
Sep 13 10:21:57.606934 kubelet[2739]: I0913 10:21:57.606927 2739 scope.go:117] "RemoveContainer" containerID="d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4"
Sep 13 10:21:57.607100 containerd[1578]: time="2025-09-13T10:21:57.607074241Z" level=error msg="ContainerStatus for \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\": not found"
Sep 13 10:21:57.607293 kubelet[2739]: E0913 10:21:57.607267 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\": not found" containerID="d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4"
Sep 13 10:21:57.607342 kubelet[2739]: I0913 10:21:57.607297 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4"} err="failed to get container status \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7d2b29f211ae679b3c9ad90b3444a4ec86c6decf1c92265c274b54b47b4dab4\": not found"
Sep 13 10:21:57.607342 kubelet[2739]: I0913 10:21:57.607315 2739 scope.go:117] "RemoveContainer" containerID="ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2"
Sep 13 10:21:57.607491 containerd[1578]: time="2025-09-13T10:21:57.607461495Z" level=error msg="ContainerStatus for \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\": not found"
Sep 13 10:21:57.607781 kubelet[2739]: E0913 10:21:57.607739 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\": not found" containerID="ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2"
Sep 13 10:21:57.607831 kubelet[2739]: I0913 10:21:57.607796 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2"} err="failed to get container status \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed0032aeab5b44ea8a19a3d883a1dcb1d6d3523b7af109d58ce794535730d5e2\": not found"
Sep 13 10:21:57.607857 kubelet[2739]: I0913 10:21:57.607837 2739 scope.go:117] "RemoveContainer" containerID="2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc"
Sep 13 10:21:57.608096 containerd[1578]: time="2025-09-13T10:21:57.608068821Z" level=error msg="ContainerStatus for \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\": not found"
Sep 13 10:21:57.608208 kubelet[2739]: E0913 10:21:57.608191 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\": not found" containerID="2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc"
Sep 13 10:21:57.608319 kubelet[2739]: I0913 10:21:57.608212 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc"} err="failed to get container status \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"2db860a2b1bd787b5aa938beac14087ef20bd7d2693b2c286977f040cf1e88dc\": not found"
Sep 13 10:21:58.218168 kubelet[2739]: I0913 10:21:58.218094 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878a6ecc-c3dc-4e20-bc71-8036d0f91f72" path="/var/lib/kubelet/pods/878a6ecc-c3dc-4e20-bc71-8036d0f91f72/volumes"
Sep 13 10:21:58.219183 kubelet[2739]: I0913 10:21:58.219154 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd" path="/var/lib/kubelet/pods/eac7faa3-dc29-4ce9-8fa7-7ac08c65a2bd/volumes"
Sep 13 10:21:58.308459 sshd[4355]: Connection closed by 10.0.0.1 port 53034
Sep 13 10:21:58.309020 sshd-session[4352]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:58.318683 systemd[1]: sshd@22-10.0.0.73:22-10.0.0.1:53034.service: Deactivated successfully.
Sep 13 10:21:58.320700 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 10:21:58.321782 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit.
Sep 13 10:21:58.324677 systemd[1]: Started sshd@23-10.0.0.73:22-10.0.0.1:53038.service - OpenSSH per-connection server daemon (10.0.0.1:53038).
Sep 13 10:21:58.325287 systemd-logind[1558]: Removed session 23.
Sep 13 10:21:58.387195 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 53038 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:58.388983 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:58.394011 systemd-logind[1558]: New session 24 of user core.
Sep 13 10:21:58.407634 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 10:21:59.096722 sshd[4509]: Connection closed by 10.0.0.1 port 53038
Sep 13 10:21:59.098555 sshd-session[4506]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:59.111747 systemd[1]: sshd@23-10.0.0.73:22-10.0.0.1:53038.service: Deactivated successfully.
Sep 13 10:21:59.115099 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 10:21:59.121779 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit.
Sep 13 10:21:59.123802 systemd-logind[1558]: Removed session 24.
Sep 13 10:21:59.129049 systemd[1]: Started sshd@24-10.0.0.73:22-10.0.0.1:53046.service - OpenSSH per-connection server daemon (10.0.0.1:53046).
Sep 13 10:21:59.151086 systemd[1]: Created slice kubepods-burstable-podaeb27001_b3c3_4b57_9673_f3a4c8d75524.slice - libcontainer container kubepods-burstable-podaeb27001_b3c3_4b57_9673_f3a4c8d75524.slice.
Sep 13 10:21:59.201831 kubelet[2739]: I0913 10:21:59.201760 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aeb27001-b3c3-4b57-9673-f3a4c8d75524-cilium-cgroup\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.201831 kubelet[2739]: I0913 10:21:59.201823 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w267r\" (UniqueName: \"kubernetes.io/projected/aeb27001-b3c3-4b57-9673-f3a4c8d75524-kube-api-access-w267r\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.201831 kubelet[2739]: I0913 10:21:59.201842 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aeb27001-b3c3-4b57-9673-f3a4c8d75524-hubble-tls\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202015 kubelet[2739]: I0913 10:21:59.201857 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aeb27001-b3c3-4b57-9673-f3a4c8d75524-cni-path\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202015 kubelet[2739]: I0913 10:21:59.201874 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aeb27001-b3c3-4b57-9673-f3a4c8d75524-bpf-maps\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202015 kubelet[2739]: I0913 10:21:59.201916 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aeb27001-b3c3-4b57-9673-f3a4c8d75524-hostproc\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202015 kubelet[2739]: I0913 10:21:59.201933 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aeb27001-b3c3-4b57-9673-f3a4c8d75524-clustermesh-secrets\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202015 kubelet[2739]: I0913 10:21:59.201952 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aeb27001-b3c3-4b57-9673-f3a4c8d75524-host-proc-sys-net\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202015 kubelet[2739]: I0913 10:21:59.201968 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aeb27001-b3c3-4b57-9673-f3a4c8d75524-etc-cni-netd\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202151 kubelet[2739]: I0913 10:21:59.201986 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeb27001-b3c3-4b57-9673-f3a4c8d75524-lib-modules\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202151 kubelet[2739]: I0913 10:21:59.202003 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aeb27001-b3c3-4b57-9673-f3a4c8d75524-host-proc-sys-kernel\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202151 kubelet[2739]: I0913 10:21:59.202022 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeb27001-b3c3-4b57-9673-f3a4c8d75524-xtables-lock\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202151 kubelet[2739]: I0913 10:21:59.202043 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aeb27001-b3c3-4b57-9673-f3a4c8d75524-cilium-ipsec-secrets\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202151 kubelet[2739]: I0913 10:21:59.202065 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aeb27001-b3c3-4b57-9673-f3a4c8d75524-cilium-run\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.202151 kubelet[2739]: I0913 10:21:59.202139 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aeb27001-b3c3-4b57-9673-f3a4c8d75524-cilium-config-path\") pod \"cilium-xvxxl\" (UID: \"aeb27001-b3c3-4b57-9673-f3a4c8d75524\") " pod="kube-system/cilium-xvxxl"
Sep 13 10:21:59.210229 sshd[4522]: Accepted publickey for core from 10.0.0.1 port 53046 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:59.212224 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:59.217990 systemd-logind[1558]: New session 25 of user core.
Sep 13 10:21:59.227678 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 10:21:59.279845 sshd[4525]: Connection closed by 10.0.0.1 port 53046
Sep 13 10:21:59.280289 sshd-session[4522]: pam_unix(sshd:session): session closed for user core
Sep 13 10:21:59.293530 systemd[1]: sshd@24-10.0.0.73:22-10.0.0.1:53046.service: Deactivated successfully.
Sep 13 10:21:59.295679 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 10:21:59.296561 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit.
Sep 13 10:21:59.299423 systemd[1]: Started sshd@25-10.0.0.73:22-10.0.0.1:53056.service - OpenSSH per-connection server daemon (10.0.0.1:53056).
Sep 13 10:21:59.300481 systemd-logind[1558]: Removed session 25.
Sep 13 10:21:59.351032 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 53056 ssh2: RSA SHA256:2ENyVARP+aFeL//FBPXNrghJd+MeTVi18Y0SE+5Vbaw
Sep 13 10:21:59.352217 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:21:59.356421 systemd-logind[1558]: New session 26 of user core.
Sep 13 10:21:59.366637 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 10:21:59.459027 containerd[1578]: time="2025-09-13T10:21:59.458965621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvxxl,Uid:aeb27001-b3c3-4b57-9673-f3a4c8d75524,Namespace:kube-system,Attempt:0,}"
Sep 13 10:21:59.479540 containerd[1578]: time="2025-09-13T10:21:59.479361670Z" level=info msg="connecting to shim f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572" address="unix:///run/containerd/s/3a4ee12ceac05f0d0bf288d3d53470785702b88bfd94f4456104080ba1e46333" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:21:59.511699 systemd[1]: Started cri-containerd-f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572.scope - libcontainer container f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572.
Sep 13 10:21:59.538518 containerd[1578]: time="2025-09-13T10:21:59.538453738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvxxl,Uid:aeb27001-b3c3-4b57-9673-f3a4c8d75524,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\""
Sep 13 10:21:59.546119 containerd[1578]: time="2025-09-13T10:21:59.546068683Z" level=info msg="CreateContainer within sandbox \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 10:21:59.553857 containerd[1578]: time="2025-09-13T10:21:59.553809631Z" level=info msg="Container 170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:21:59.560408 containerd[1578]: time="2025-09-13T10:21:59.560367519Z" level=info msg="CreateContainer within sandbox \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa\""
Sep 13 10:21:59.560988 containerd[1578]: time="2025-09-13T10:21:59.560947551Z" level=info msg="StartContainer for \"170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa\""
Sep 13 10:21:59.562009 containerd[1578]: time="2025-09-13T10:21:59.561980352Z" level=info msg="connecting to shim 170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa" address="unix:///run/containerd/s/3a4ee12ceac05f0d0bf288d3d53470785702b88bfd94f4456104080ba1e46333" protocol=ttrpc version=3
Sep 13 10:21:59.583677 systemd[1]: Started cri-containerd-170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa.scope - libcontainer container 170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa.
Sep 13 10:21:59.624004 systemd[1]: cri-containerd-170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa.scope: Deactivated successfully.
Sep 13 10:21:59.625554 containerd[1578]: time="2025-09-13T10:21:59.625459856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa\" id:\"170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa\" pid:4605 exited_at:{seconds:1757758919 nanos:625004883}"
Sep 13 10:21:59.747253 containerd[1578]: time="2025-09-13T10:21:59.747179673Z" level=info msg="received exit event container_id:\"170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa\" id:\"170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa\" pid:4605 exited_at:{seconds:1757758919 nanos:625004883}"
Sep 13 10:21:59.748705 containerd[1578]: time="2025-09-13T10:21:59.748677987Z" level=info msg="StartContainer for \"170b65977cb981259dbee553c7fface1c2fe9c55b9c6a10c3c8dee8a7ee8adfa\" returns successfully"
Sep 13 10:22:00.277795 kubelet[2739]: E0913 10:22:00.277740 2739 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 10:22:00.438458 containerd[1578]: time="2025-09-13T10:22:00.438002908Z" level=info msg="CreateContainer within sandbox \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 10:22:00.447120 containerd[1578]: time="2025-09-13T10:22:00.447073364Z" level=info msg="Container 9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:22:00.453956 containerd[1578]: time="2025-09-13T10:22:00.453900131Z" level=info msg="CreateContainer within sandbox \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8\""
Sep 13 10:22:00.454428 containerd[1578]: time="2025-09-13T10:22:00.454396312Z" level=info msg="StartContainer for \"9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8\""
Sep 13 10:22:00.455294 containerd[1578]: time="2025-09-13T10:22:00.455266440Z" level=info msg="connecting to shim 9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8" address="unix:///run/containerd/s/3a4ee12ceac05f0d0bf288d3d53470785702b88bfd94f4456104080ba1e46333" protocol=ttrpc version=3
Sep 13 10:22:00.478642 systemd[1]: Started cri-containerd-9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8.scope - libcontainer container 9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8.
Sep 13 10:22:00.507239 containerd[1578]: time="2025-09-13T10:22:00.507199660Z" level=info msg="StartContainer for \"9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8\" returns successfully"
Sep 13 10:22:00.513912 systemd[1]: cri-containerd-9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8.scope: Deactivated successfully.
Sep 13 10:22:00.514416 containerd[1578]: time="2025-09-13T10:22:00.514383591Z" level=info msg="received exit event container_id:\"9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8\" id:\"9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8\" pid:4649 exited_at:{seconds:1757758920 nanos:514070010}"
Sep 13 10:22:00.514505 containerd[1578]: time="2025-09-13T10:22:00.514469256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8\" id:\"9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8\" pid:4649 exited_at:{seconds:1757758920 nanos:514070010}"
Sep 13 10:22:00.535462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b9b5a47578f881040125cf1aff79f2b707538aa7a270f6f3d339f744d073fc8-rootfs.mount: Deactivated successfully.
Sep 13 10:22:01.458598 containerd[1578]: time="2025-09-13T10:22:01.458522343Z" level=info msg="CreateContainer within sandbox \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 10:22:01.497531 containerd[1578]: time="2025-09-13T10:22:01.496699805Z" level=info msg="Container 729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:22:01.512482 containerd[1578]: time="2025-09-13T10:22:01.512417858Z" level=info msg="CreateContainer within sandbox \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57\""
Sep 13 10:22:01.513142 containerd[1578]: time="2025-09-13T10:22:01.513082932Z" level=info msg="StartContainer for \"729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57\""
Sep 13 10:22:01.514655 containerd[1578]: time="2025-09-13T10:22:01.514624866Z" level=info msg="connecting to shim 729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57" address="unix:///run/containerd/s/3a4ee12ceac05f0d0bf288d3d53470785702b88bfd94f4456104080ba1e46333" protocol=ttrpc version=3
Sep 13 10:22:01.549683 systemd[1]: Started cri-containerd-729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57.scope - libcontainer container 729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57.
Sep 13 10:22:01.593249 systemd[1]: cri-containerd-729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57.scope: Deactivated successfully.
Sep 13 10:22:01.594288 containerd[1578]: time="2025-09-13T10:22:01.594243422Z" level=info msg="received exit event container_id:\"729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57\" id:\"729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57\" pid:4695 exited_at:{seconds:1757758921 nanos:593929249}"
Sep 13 10:22:01.594475 containerd[1578]: time="2025-09-13T10:22:01.594453764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57\" id:\"729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57\" pid:4695 exited_at:{seconds:1757758921 nanos:593929249}"
Sep 13 10:22:01.603343 containerd[1578]: time="2025-09-13T10:22:01.603305723Z" level=info msg="StartContainer for \"729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57\" returns successfully"
Sep 13 10:22:01.617363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-729092dbe568b779308c3d528a920dea003d5fc571e8881f64135761f54d0a57-rootfs.mount: Deactivated successfully.
Sep 13 10:22:02.365409 kubelet[2739]: I0913 10:22:02.365348 2739 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T10:22:02Z","lastTransitionTime":"2025-09-13T10:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 10:22:02.535177 containerd[1578]: time="2025-09-13T10:22:02.535116033Z" level=info msg="CreateContainer within sandbox \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 10:22:02.549723 containerd[1578]: time="2025-09-13T10:22:02.549664270Z" level=info msg="Container 60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:22:02.557630 containerd[1578]: time="2025-09-13T10:22:02.557586960Z" level=info msg="CreateContainer within sandbox \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3\""
Sep 13 10:22:02.558108 containerd[1578]: time="2025-09-13T10:22:02.558077399Z" level=info msg="StartContainer for \"60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3\""
Sep 13 10:22:02.558916 containerd[1578]: time="2025-09-13T10:22:02.558888512Z" level=info msg="connecting to shim 60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3" address="unix:///run/containerd/s/3a4ee12ceac05f0d0bf288d3d53470785702b88bfd94f4456104080ba1e46333" protocol=ttrpc version=3
Sep 13 10:22:02.582646 systemd[1]: Started cri-containerd-60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3.scope - libcontainer container 60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3.
Sep 13 10:22:02.609813 systemd[1]: cri-containerd-60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3.scope: Deactivated successfully.
Sep 13 10:22:02.611354 containerd[1578]: time="2025-09-13T10:22:02.611319577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3\" id:\"60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3\" pid:4734 exited_at:{seconds:1757758922 nanos:611084396}"
Sep 13 10:22:02.611512 containerd[1578]: time="2025-09-13T10:22:02.611455457Z" level=info msg="received exit event container_id:\"60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3\" id:\"60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3\" pid:4734 exited_at:{seconds:1757758922 nanos:611084396}"
Sep 13 10:22:02.624518 containerd[1578]: time="2025-09-13T10:22:02.624370166Z" level=info msg="StartContainer for \"60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3\" returns successfully"
Sep 13 10:22:02.634218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60d29c7ccd4ae4f76d23c8503785a548e7c3de8626f9be6f0772f9fd2a0d80f3-rootfs.mount: Deactivated successfully.
Sep 13 10:22:03.476976 containerd[1578]: time="2025-09-13T10:22:03.476922736Z" level=info msg="CreateContainer within sandbox \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 10:22:03.485902 containerd[1578]: time="2025-09-13T10:22:03.485844830Z" level=info msg="Container 568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:22:03.494667 containerd[1578]: time="2025-09-13T10:22:03.494613662Z" level=info msg="CreateContainer within sandbox \"f3f209f8dd2ff7f9641182df16805a139bb8ba99fc3a7f30fe5aed1eee21d572\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956\""
Sep 13 10:22:03.495244 containerd[1578]: time="2025-09-13T10:22:03.495191548Z" level=info msg="StartContainer for \"568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956\""
Sep 13 10:22:03.496156 containerd[1578]: time="2025-09-13T10:22:03.496122550Z" level=info msg="connecting to shim 568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956" address="unix:///run/containerd/s/3a4ee12ceac05f0d0bf288d3d53470785702b88bfd94f4456104080ba1e46333" protocol=ttrpc version=3
Sep 13 10:22:03.520666 systemd[1]: Started cri-containerd-568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956.scope - libcontainer container 568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956.
Sep 13 10:22:03.569550 containerd[1578]: time="2025-09-13T10:22:03.569472937Z" level=info msg="StartContainer for \"568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956\" returns successfully"
Sep 13 10:22:03.683697 containerd[1578]: time="2025-09-13T10:22:03.683636361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956\" id:\"8c8a55f8473280ea75170da93e30cf4bc3aa003edb58456184ad2903223e41be\" pid:4800 exited_at:{seconds:1757758923 nanos:683182813}"
Sep 13 10:22:04.101520 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 13 10:22:04.475589 kubelet[2739]: I0913 10:22:04.475521 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xvxxl" podStartSLOduration=5.475487452 podStartE2EDuration="5.475487452s" podCreationTimestamp="2025-09-13 10:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:22:04.473247367 +0000 UTC m=+84.362611737" watchObservedRunningTime="2025-09-13 10:22:04.475487452 +0000 UTC m=+84.364851802"
Sep 13 10:22:05.766864 containerd[1578]: time="2025-09-13T10:22:05.766805841Z" level=info msg="TaskExit event in podsandbox handler container_id:\"568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956\" id:\"a52faf5cbf6c33cc0fbb273dd76f48bd3fb056e2b6dbddcaac7da7eac8578327\" pid:4943 exit_status:1 exited_at:{seconds:1757758925 nanos:766279164}"
Sep 13 10:22:07.217706 systemd-networkd[1490]: lxc_health: Link UP
Sep 13 10:22:07.229853 systemd-networkd[1490]: lxc_health: Gained carrier
Sep 13 10:22:07.883323 containerd[1578]: time="2025-09-13T10:22:07.883268058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956\" id:\"d4055bad7220cb3b78810f83454a75b8dac854bfee27c33d9925b8709018e3ee\" pid:5332 exited_at:{seconds:1757758927 nanos:882907560}"
Sep 13 10:22:08.740827 systemd-networkd[1490]: lxc_health: Gained IPv6LL
Sep 13 10:22:09.992056 containerd[1578]: time="2025-09-13T10:22:09.992006263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956\" id:\"baf02cff6ffa5ae1cec5385bba481f4d43efdc7aa29016d5498f6bb9fc7767f3\" pid:5367 exited_at:{seconds:1757758929 nanos:991643040}"
Sep 13 10:22:12.206280 containerd[1578]: time="2025-09-13T10:22:12.206214064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956\" id:\"cfffd57cc92d2fe5cb0398d6f52238c55ddbcc744139b7d9c483ff68d1d17c48\" pid:5400 exited_at:{seconds:1757758932 nanos:205555980}"
Sep 13 10:22:14.312036 containerd[1578]: time="2025-09-13T10:22:14.311972020Z" level=info msg="TaskExit event in podsandbox handler container_id:\"568c87b78d90361343947b8fa29c2e21410341f446ce60ba31698e83140ee956\" id:\"3525b800dfd2a7ce86fb92c9a619042e19d2206d1ef62fde986945273d59d99b\" pid:5424 exited_at:{seconds:1757758934 nanos:311428515}"
Sep 13 10:22:14.328156 sshd[4539]: Connection closed by 10.0.0.1 port 53056
Sep 13 10:22:14.328718 sshd-session[4532]: pam_unix(sshd:session): session closed for user core
Sep 13 10:22:14.333906 systemd[1]: sshd@25-10.0.0.73:22-10.0.0.1:53056.service: Deactivated successfully.
Sep 13 10:22:14.336629 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 10:22:14.337640 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit.
Sep 13 10:22:14.339464 systemd-logind[1558]: Removed session 26.