Sep 9 00:35:33.845096 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:16:40 -00 2025
Sep 9 00:35:33.845127 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:35:33.845138 kernel: BIOS-provided physical RAM map:
Sep 9 00:35:33.845145 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 00:35:33.845152 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 00:35:33.845158 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 00:35:33.845166 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 00:35:33.845173 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 00:35:33.845184 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 00:35:33.845190 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 00:35:33.845197 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 9 00:35:33.845204 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 00:35:33.845210 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 00:35:33.845217 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 00:35:33.845227 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 00:35:33.845235 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 00:35:33.845244 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 00:35:33.845251 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 00:35:33.845258 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 00:35:33.845265 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 00:35:33.845272 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 00:35:33.845279 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 00:35:33.845286 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 00:35:33.845293 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:35:33.845300 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 00:35:33.845310 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:35:33.845317 kernel: NX (Execute Disable) protection: active
Sep 9 00:35:33.845324 kernel: APIC: Static calls initialized
Sep 9 00:35:33.845331 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 9 00:35:33.845338 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 9 00:35:33.845345 kernel: extended physical RAM map:
Sep 9 00:35:33.845352 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 00:35:33.845359 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 00:35:33.845366 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 00:35:33.845374 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 00:35:33.845381 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 00:35:33.845390 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 00:35:33.845397 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 00:35:33.845404 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 9 00:35:33.845412 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 9 00:35:33.845422 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 9 00:35:33.845429 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 9 00:35:33.845438 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 9 00:35:33.845446 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 00:35:33.845453 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 00:35:33.845461 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 00:35:33.845468 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 00:35:33.845475 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 00:35:33.845483 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 00:35:33.845490 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 00:35:33.845497 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 00:35:33.845507 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 00:35:33.845514 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 00:35:33.845521 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 00:35:33.845529 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 00:35:33.845536 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:35:33.845543 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 00:35:33.845550 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:35:33.845560 kernel: efi: EFI v2.7 by EDK II
Sep 9 00:35:33.845568 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 9 00:35:33.845575 kernel: random: crng init done
Sep 9 00:35:33.845585 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 9 00:35:33.845592 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 9 00:35:33.845604 kernel: secureboot: Secure boot disabled
Sep 9 00:35:33.845612 kernel: SMBIOS 2.8 present.
Sep 9 00:35:33.845619 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 9 00:35:33.845626 kernel: DMI: Memory slots populated: 1/1
Sep 9 00:35:33.845634 kernel: Hypervisor detected: KVM
Sep 9 00:35:33.845665 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 00:35:33.845672 kernel: kvm-clock: using sched offset of 5446660226 cycles
Sep 9 00:35:33.845680 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 00:35:33.845688 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 00:35:33.845695 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 00:35:33.845703 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 00:35:33.845720 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 9 00:35:33.845728 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 9 00:35:33.845735 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 00:35:33.845743 kernel: Using GB pages for direct mapping
Sep 9 00:35:33.845750 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:35:33.845758 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 9 00:35:33.845765 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:35:33.845773 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:33.845781 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:33.845791 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 9 00:35:33.845799 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:33.845806 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:33.845814 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:33.845821 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:33.845829 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 00:35:33.845836 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 9 00:35:33.845844 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 9 00:35:33.845854 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 9 00:35:33.845861 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 9 00:35:33.845869 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 9 00:35:33.845876 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 9 00:35:33.845884 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 9 00:35:33.845891 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 9 00:35:33.845898 kernel: No NUMA configuration found
Sep 9 00:35:33.845906 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 9 00:35:33.845913 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 9 00:35:33.845921 kernel: Zone ranges:
Sep 9 00:35:33.845930 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 00:35:33.845938 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 9 00:35:33.845945 kernel: Normal empty
Sep 9 00:35:33.845953 kernel: Device empty
Sep 9 00:35:33.845960 kernel: Movable zone start for each node
Sep 9 00:35:33.845967 kernel: Early memory node ranges
Sep 9 00:35:33.845975 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 9 00:35:33.845982 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 9 00:35:33.845992 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 9 00:35:33.846002 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 9 00:35:33.846009 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 9 00:35:33.846017 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 9 00:35:33.846024 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 9 00:35:33.846031 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 9 00:35:33.846039 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 9 00:35:33.846048 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:35:33.846056 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 9 00:35:33.846073 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 9 00:35:33.846081 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:35:33.846088 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 9 00:35:33.846096 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 9 00:35:33.846106 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 00:35:33.846114 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 9 00:35:33.846122 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 9 00:35:33.846130 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 00:35:33.846137 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 00:35:33.846147 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 00:35:33.846155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 00:35:33.846163 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 00:35:33.846171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 00:35:33.846179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 00:35:33.846186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 00:35:33.846194 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 00:35:33.846202 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 00:35:33.846210 kernel: TSC deadline timer available
Sep 9 00:35:33.846220 kernel: CPU topo: Max. logical packages: 1
Sep 9 00:35:33.846227 kernel: CPU topo: Max. logical dies: 1
Sep 9 00:35:33.846235 kernel: CPU topo: Max. dies per package: 1
Sep 9 00:35:33.846243 kernel: CPU topo: Max. threads per core: 1
Sep 9 00:35:33.846250 kernel: CPU topo: Num. cores per package: 4
Sep 9 00:35:33.846258 kernel: CPU topo: Num. threads per package: 4
Sep 9 00:35:33.846266 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 00:35:33.846273 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 00:35:33.846282 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 00:35:33.846292 kernel: kvm-guest: setup PV sched yield
Sep 9 00:35:33.846305 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 9 00:35:33.846316 kernel: Booting paravirtualized kernel on KVM
Sep 9 00:35:33.846327 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 00:35:33.846338 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 00:35:33.846348 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 00:35:33.846359 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 00:35:33.846368 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 00:35:33.846375 kernel: kvm-guest: PV spinlocks enabled
Sep 9 00:35:33.846383 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 00:35:33.846395 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:35:33.846407 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:35:33.846415 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:35:33.846424 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:35:33.846433 kernel: Fallback order for Node 0: 0
Sep 9 00:35:33.846442 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 9 00:35:33.846451 kernel: Policy zone: DMA32
Sep 9 00:35:33.846459 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:35:33.846470 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:35:33.846478 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 9 00:35:33.846486 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 00:35:33.846493 kernel: Dynamic Preempt: voluntary
Sep 9 00:35:33.846501 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:35:33.846510 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:35:33.846518 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:35:33.846526 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:35:33.846533 kernel: Rude variant of Tasks RCU enabled.
Sep 9 00:35:33.846544 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:35:33.846552 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:35:33.846562 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:35:33.846570 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:35:33.846578 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:35:33.846586 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:35:33.846593 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 00:35:33.846601 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:35:33.846609 kernel: Console: colour dummy device 80x25
Sep 9 00:35:33.846620 kernel: printk: legacy console [ttyS0] enabled
Sep 9 00:35:33.846627 kernel: ACPI: Core revision 20240827
Sep 9 00:35:33.846650 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 00:35:33.846658 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 00:35:33.846666 kernel: x2apic enabled
Sep 9 00:35:33.846674 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 00:35:33.846681 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 00:35:33.846689 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 00:35:33.846697 kernel: kvm-guest: setup PV IPIs
Sep 9 00:35:33.846708 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 00:35:33.846722 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:35:33.846731 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 00:35:33.846738 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 00:35:33.846746 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 00:35:33.846754 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 00:35:33.846762 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 00:35:33.846770 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 00:35:33.846778 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 00:35:33.846788 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 00:35:33.846796 kernel: active return thunk: retbleed_return_thunk
Sep 9 00:35:33.846804 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 00:35:33.846816 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 00:35:33.846824 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 00:35:33.846835 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 00:35:33.846854 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 00:35:33.846862 kernel: active return thunk: srso_return_thunk
Sep 9 00:35:33.846874 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 00:35:33.846882 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 00:35:33.846889 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 00:35:33.846897 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 00:35:33.846905 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 00:35:33.846913 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 00:35:33.846921 kernel: Freeing SMP alternatives memory: 32K
Sep 9 00:35:33.846938 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:35:33.846947 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 00:35:33.846960 kernel: landlock: Up and running.
Sep 9 00:35:33.846967 kernel: SELinux: Initializing.
Sep 9 00:35:33.846975 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:35:33.846983 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:35:33.846991 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 00:35:33.846999 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 00:35:33.847007 kernel: ... version: 0
Sep 9 00:35:33.847015 kernel: ... bit width: 48
Sep 9 00:35:33.847027 kernel: ... generic registers: 6
Sep 9 00:35:33.847043 kernel: ... value mask: 0000ffffffffffff
Sep 9 00:35:33.847062 kernel: ... max period: 00007fffffffffff
Sep 9 00:35:33.847071 kernel: ... fixed-purpose events: 0
Sep 9 00:35:33.847079 kernel: ... event mask: 000000000000003f
Sep 9 00:35:33.847086 kernel: signal: max sigframe size: 1776
Sep 9 00:35:33.847094 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:35:33.847105 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:35:33.847113 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 00:35:33.847120 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:35:33.847131 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 00:35:33.847139 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 00:35:33.847147 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:35:33.847154 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 00:35:33.847163 kernel: Memory: 2424720K/2565800K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 135148K reserved, 0K cma-reserved)
Sep 9 00:35:33.847170 kernel: devtmpfs: initialized
Sep 9 00:35:33.847178 kernel: x86/mm: Memory block size: 128MB
Sep 9 00:35:33.847186 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 9 00:35:33.847194 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 9 00:35:33.847204 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 9 00:35:33.847212 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 9 00:35:33.847220 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 9 00:35:33.847227 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 9 00:35:33.847235 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:35:33.847246 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:35:33.847254 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:35:33.847262 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:35:33.847269 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:35:33.847279 kernel: audit: type=2000 audit(1757378130.818:1): state=initialized audit_enabled=0 res=1
Sep 9 00:35:33.847287 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:35:33.847295 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 00:35:33.847303 kernel: cpuidle: using governor menu
Sep 9 00:35:33.847310 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:35:33.847318 kernel: dca service started, version 1.12.1
Sep 9 00:35:33.847326 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 9 00:35:33.847334 kernel: PCI: Using configuration type 1 for base access
Sep 9 00:35:33.847342 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 00:35:33.847352 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:35:33.847359 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:35:33.847367 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:35:33.847375 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:35:33.847383 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:35:33.847390 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:35:33.847398 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:35:33.847406 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:35:33.847413 kernel: ACPI: Interpreter enabled
Sep 9 00:35:33.847423 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 00:35:33.847431 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 00:35:33.847439 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 00:35:33.847449 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 00:35:33.847459 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 00:35:33.847469 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:35:33.847797 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:35:33.847938 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 00:35:33.848060 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 00:35:33.848071 kernel: PCI host bridge to bus 0000:00
Sep 9 00:35:33.848203 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 00:35:33.848317 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 00:35:33.848438 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 00:35:33.848550 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 9 00:35:33.848721 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 9 00:35:33.848872 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 9 00:35:33.849066 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:35:33.849445 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 00:35:33.849619 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 00:35:33.849781 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 9 00:35:33.849905 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 9 00:35:33.850035 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 9 00:35:33.850156 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 00:35:33.850298 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 00:35:33.850426 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 9 00:35:33.850547 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 9 00:35:33.851757 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 9 00:35:33.851935 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 00:35:33.852069 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 9 00:35:33.852190 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 9 00:35:33.852313 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 9 00:35:33.852468 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 00:35:33.852593 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 9 00:35:33.852740 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 9 00:35:33.852869 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 9 00:35:33.852988 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 9 00:35:33.853127 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 00:35:33.853249 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 00:35:33.853403 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 00:35:33.853528 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 9 00:35:33.853664 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 9 00:35:33.853894 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 00:35:33.854017 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 9 00:35:33.854028 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 00:35:33.854037 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 00:35:33.854045 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 00:35:33.854053 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 00:35:33.854060 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 00:35:33.854068 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 00:35:33.854081 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 00:35:33.854088 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 00:35:33.854096 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 00:35:33.854104 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 00:35:33.854112 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 00:35:33.854120 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 00:35:33.854128 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 00:35:33.854135 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 00:35:33.854143 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 00:35:33.854153 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 00:35:33.854161 kernel: iommu: Default domain type: Translated
Sep 9 00:35:33.854169 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 00:35:33.854177 kernel: efivars: Registered efivars operations
Sep 9 00:35:33.854184 kernel: PCI: Using ACPI for IRQ routing
Sep 9 00:35:33.854192 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 00:35:33.854200 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 9 00:35:33.854207 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 9 00:35:33.854215 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 9 00:35:33.854225 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 9 00:35:33.854232 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 9 00:35:33.854240 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 9 00:35:33.854248 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 9 00:35:33.854255 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 9 00:35:33.854388 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 00:35:33.854520 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 00:35:33.854658 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 00:35:33.854675 kernel: vgaarb: loaded
Sep 9 00:35:33.854683 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 00:35:33.854691 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 00:35:33.854699 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 00:35:33.854707 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:35:33.854723 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:35:33.854732 kernel: pnp: PnP ACPI init
Sep 9 00:35:33.854902 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 9 00:35:33.854921 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 00:35:33.854929 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 00:35:33.854938 kernel: NET: Registered PF_INET protocol family
Sep 9 00:35:33.854948 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:35:33.854956 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:35:33.854965 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:35:33.854973 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:35:33.854981 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:35:33.854989 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:35:33.854999 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:35:33.855007 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:35:33.855016 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:35:33.855024 kernel: NET: Registered PF_XDP protocol family
Sep 9 00:35:33.855149 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 9 00:35:33.855271 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 9 00:35:33.855402 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 00:35:33.855515 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 00:35:33.855630 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 00:35:33.855768 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 9 00:35:33.855877 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 9 00:35:33.855985 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 9 00:35:33.855996 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:35:33.856004 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:35:33.856013 kernel: Initialise system trusted keyrings
Sep 9 00:35:33.856025 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:35:33.856033 kernel: Key type asymmetric registered
Sep 9 00:35:33.856041 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:35:33.856050 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 00:35:33.856058 kernel: io scheduler mq-deadline registered
Sep 9 00:35:33.856066 kernel: io scheduler kyber registered
Sep 9 00:35:33.856074 kernel: io scheduler bfq registered
Sep 9 00:35:33.856085 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 00:35:33.856093 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 00:35:33.856101 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 00:35:33.856110 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 00:35:33.856118 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:35:33.856126 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 00:35:33.856134 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 00:35:33.856142 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 00:35:33.856150 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 00:35:33.856160 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 00:35:33.856298 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 00:35:33.856430 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 00:35:33.856554 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:35:33 UTC
(1757378133) Sep 9 00:35:33.856869 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 9 00:35:33.856882 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 00:35:33.856890 kernel: efifb: probing for efifb Sep 9 00:35:33.856899 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 9 00:35:33.856912 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 9 00:35:33.856920 kernel: efifb: scrolling: redraw Sep 9 00:35:33.856928 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 9 00:35:33.856937 kernel: Console: switching to colour frame buffer device 160x50 Sep 9 00:35:33.856945 kernel: fb0: EFI VGA frame buffer device Sep 9 00:35:33.856953 kernel: pstore: Using crash dump compression: deflate Sep 9 00:35:33.856962 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 00:35:33.856970 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:35:33.856978 kernel: Segment Routing with IPv6 Sep 9 00:35:33.856989 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:35:33.856997 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:35:33.857005 kernel: Key type dns_resolver registered Sep 9 00:35:33.857013 kernel: IPI shorthand broadcast: enabled Sep 9 00:35:33.857021 kernel: sched_clock: Marking stable (3683002270, 159674098)->(3861277083, -18600715) Sep 9 00:35:33.857030 kernel: registered taskstats version 1 Sep 9 00:35:33.857038 kernel: Loading compiled-in X.509 certificates Sep 9 00:35:33.857046 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 08d0986253b18b7fd74c2cc5404da4ba92260e75' Sep 9 00:35:33.857055 kernel: Demotion targets for Node 0: null Sep 9 00:35:33.857065 kernel: Key type .fscrypt registered Sep 9 00:35:33.857073 kernel: Key type fscrypt-provisioning registered Sep 9 00:35:33.857082 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 9 00:35:33.857090 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:35:33.857098 kernel: ima: No architecture policies found Sep 9 00:35:33.857106 kernel: clk: Disabling unused clocks Sep 9 00:35:33.857114 kernel: Warning: unable to open an initial console. Sep 9 00:35:33.857123 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 9 00:35:33.857131 kernel: Write protecting the kernel read-only data: 24576k Sep 9 00:35:33.857142 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 9 00:35:33.857150 kernel: Run /init as init process Sep 9 00:35:33.857158 kernel: with arguments: Sep 9 00:35:33.857166 kernel: /init Sep 9 00:35:33.857174 kernel: with environment: Sep 9 00:35:33.857182 kernel: HOME=/ Sep 9 00:35:33.857189 kernel: TERM=linux Sep 9 00:35:33.857197 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:35:33.857209 systemd[1]: Successfully made /usr/ read-only. Sep 9 00:35:33.857225 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:35:33.857234 systemd[1]: Detected virtualization kvm. Sep 9 00:35:33.857243 systemd[1]: Detected architecture x86-64. Sep 9 00:35:33.857251 systemd[1]: Running in initrd. Sep 9 00:35:33.857259 systemd[1]: No hostname configured, using default hostname. Sep 9 00:35:33.857268 systemd[1]: Hostname set to . Sep 9 00:35:33.857277 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:35:33.857288 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:35:33.857296 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 9 00:35:33.857305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:35:33.857315 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:35:33.857324 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:35:33.857333 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:35:33.857344 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:35:33.857361 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:35:33.857372 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:35:33.857381 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:35:33.857390 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:35:33.857398 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:35:33.857407 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:35:33.857416 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:35:33.857424 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:35:33.857436 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:35:33.857444 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:35:33.857453 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:35:33.857462 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 00:35:33.857471 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:35:33.857479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 9 00:35:33.857488 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:35:33.857497 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:35:33.857506 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:35:33.857517 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:35:33.857525 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:35:33.857535 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 00:35:33.857543 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:35:33.857552 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:35:33.857561 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:35:33.857569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:35:33.857580 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:35:33.857592 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:35:33.857601 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:35:33.857609 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:35:33.857666 systemd-journald[220]: Collecting audit messages is disabled. Sep 9 00:35:33.857691 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:35:33.857701 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:35:33.857719 systemd-journald[220]: Journal started Sep 9 00:35:33.857744 systemd-journald[220]: Runtime Journal (/run/log/journal/a56674db579c451b8e688c7b0cadeec8) is 6M, max 48.5M, 42.4M free. 
Sep 9 00:35:33.850668 systemd-modules-load[221]: Inserted module 'overlay' Sep 9 00:35:33.871567 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:35:33.869567 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:33.871721 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:35:33.882934 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:35:33.885759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:35:33.891677 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:35:33.893336 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 9 00:35:33.893749 kernel: Bridge firewalling registered Sep 9 00:35:33.894794 systemd-tmpfiles[236]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 00:35:33.895753 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:35:33.901089 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:35:33.901380 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:35:33.914909 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:35:33.916568 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 00:35:33.933818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:35:33.937270 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 9 00:35:33.952482 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 00:35:34.007275 systemd-resolved[263]: Positive Trust Anchors: Sep 9 00:35:34.007302 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:35:34.007346 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:35:34.011176 systemd-resolved[263]: Defaulting to hostname 'linux'. Sep 9 00:35:34.014361 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:35:34.018131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:35:34.088685 kernel: SCSI subsystem initialized Sep 9 00:35:34.097681 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:35:34.108674 kernel: iscsi: registered transport (tcp) Sep 9 00:35:34.134028 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:35:34.134109 kernel: QLogic iSCSI HBA Driver Sep 9 00:35:34.156658 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 9 00:35:34.187888 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:35:34.192863 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:35:34.318728 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:35:34.320432 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:35:34.381670 kernel: raid6: avx2x4 gen() 29819 MB/s Sep 9 00:35:34.398661 kernel: raid6: avx2x2 gen() 28404 MB/s Sep 9 00:35:34.415780 kernel: raid6: avx2x1 gen() 24955 MB/s Sep 9 00:35:34.415811 kernel: raid6: using algorithm avx2x4 gen() 29819 MB/s Sep 9 00:35:34.433748 kernel: raid6: .... xor() 6363 MB/s, rmw enabled Sep 9 00:35:34.433779 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:35:34.459691 kernel: xor: automatically using best checksumming function avx Sep 9 00:35:34.729709 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:35:34.739386 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:35:34.742058 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:35:34.787286 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 9 00:35:34.793219 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:35:34.797720 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:35:34.829675 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Sep 9 00:35:34.864499 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:35:34.866605 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:35:34.966421 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:35:34.970422 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 9 00:35:35.010696 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 00:35:35.013318 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:35:35.022026 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:35:35.022080 kernel: GPT:9289727 != 19775487 Sep 9 00:35:35.022092 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:35:35.022102 kernel: GPT:9289727 != 19775487 Sep 9 00:35:35.022112 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:35:35.022122 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:35.028674 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 00:35:35.038673 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:35:35.038743 kernel: libata version 3.00 loaded. Sep 9 00:35:35.049690 kernel: AES CTR mode by8 optimization enabled Sep 9 00:35:35.056963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:35:35.060429 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:35.073867 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:35:35.078303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:35:35.079796 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:35:35.093659 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 00:35:35.095104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 9 00:35:35.101449 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 00:35:35.101471 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 00:35:35.101653 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 00:35:35.101809 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 00:35:35.095227 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:35.102188 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:35:35.117358 kernel: scsi host0: ahci Sep 9 00:35:35.118246 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 00:35:35.120335 kernel: scsi host1: ahci Sep 9 00:35:35.122660 kernel: scsi host2: ahci Sep 9 00:35:35.124700 kernel: scsi host3: ahci Sep 9 00:35:35.126697 kernel: scsi host4: ahci Sep 9 00:35:35.127797 kernel: scsi host5: ahci Sep 9 00:35:35.130713 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 9 00:35:35.130772 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 9 00:35:35.130784 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 9 00:35:35.130795 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 9 00:35:35.132685 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 9 00:35:35.132740 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 9 00:35:35.145304 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:35.156253 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:35:35.165339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 9 00:35:35.174492 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 00:35:35.174666 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 00:35:35.182088 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:35:35.438686 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 00:35:35.438779 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 00:35:35.439689 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 00:35:35.446665 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 00:35:35.446697 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 00:35:35.447683 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 00:35:35.448684 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:35:35.448706 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 00:35:35.482854 kernel: ata3.00: applying bridge limits Sep 9 00:35:35.484002 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:35:35.484015 kernel: ata3.00: configured for UDMA/100 Sep 9 00:35:35.484689 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 00:35:35.543338 disk-uuid[637]: Primary Header is updated. Sep 9 00:35:35.543338 disk-uuid[637]: Secondary Entries is updated. Sep 9 00:35:35.543338 disk-uuid[637]: Secondary Header is updated. Sep 9 00:35:35.547670 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:35.547699 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 00:35:35.549278 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:35:35.552661 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:35.567696 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:35:35.976750 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Sep 9 00:35:35.978471 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:35:35.980295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:35:35.981609 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:35:35.982876 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:35:36.007496 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:35:36.555670 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:36.556118 disk-uuid[638]: The operation has completed successfully. Sep 9 00:35:36.584610 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:35:36.584860 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:35:36.628667 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:35:36.659416 sh[666]: Success Sep 9 00:35:36.679601 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:35:36.679713 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:35:36.679730 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 00:35:36.689662 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 00:35:36.727755 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:35:36.731831 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:35:36.755690 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 9 00:35:36.763665 kernel: BTRFS: device fsid c483a4f4-f0a7-42f4-ac8d-111955dab3a7 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (678) Sep 9 00:35:36.765664 kernel: BTRFS info (device dm-0): first mount of filesystem c483a4f4-f0a7-42f4-ac8d-111955dab3a7 Sep 9 00:35:36.765728 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:35:36.771039 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:35:36.771077 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 00:35:36.772817 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:35:36.773522 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:35:36.775725 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:35:36.776705 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:35:36.777699 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:35:36.806717 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711) Sep 9 00:35:36.808926 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:35:36.808993 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:35:36.812173 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:35:36.812217 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:35:36.817678 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:35:36.818118 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:35:36.821802 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 9 00:35:36.978696 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:35:36.982811 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:35:36.990807 ignition[754]: Ignition 2.21.0 Sep 9 00:35:36.990821 ignition[754]: Stage: fetch-offline Sep 9 00:35:36.990877 ignition[754]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:36.990892 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:36.990990 ignition[754]: parsed url from cmdline: "" Sep 9 00:35:36.990995 ignition[754]: no config URL provided Sep 9 00:35:36.991018 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:35:36.991028 ignition[754]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:35:36.991061 ignition[754]: op(1): [started] loading QEMU firmware config module Sep 9 00:35:36.991067 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:35:37.004368 ignition[754]: op(1): [finished] loading QEMU firmware config module Sep 9 00:35:37.005532 ignition[754]: QEMU firmware config was not found. Ignoring... Sep 9 00:35:37.044369 systemd-networkd[854]: lo: Link UP Sep 9 00:35:37.044382 systemd-networkd[854]: lo: Gained carrier Sep 9 00:35:37.045986 systemd-networkd[854]: Enumeration completed Sep 9 00:35:37.046072 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:35:37.047136 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:35:37.047141 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:35:37.048691 systemd[1]: Reached target network.target - Network. 
Sep 9 00:35:37.048904 systemd-networkd[854]: eth0: Link UP Sep 9 00:35:37.051439 systemd-networkd[854]: eth0: Gained carrier Sep 9 00:35:37.051450 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:35:37.056847 ignition[754]: parsing config with SHA512: cb4191da06ce63b2a1f2af8e050c5b4563a9410d376740362489ba3932db3e634da304c66c5d08997acd5cb9f58d60212c1547ef548aaf85ff1d99348ead4e46 Sep 9 00:35:37.060600 unknown[754]: fetched base config from "system" Sep 9 00:35:37.060616 unknown[754]: fetched user config from "qemu" Sep 9 00:35:37.060996 ignition[754]: fetch-offline: fetch-offline passed Sep 9 00:35:37.061075 ignition[754]: Ignition finished successfully Sep 9 00:35:37.064705 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:35:37.066098 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:35:37.067263 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:35:37.071755 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:35:37.115538 ignition[861]: Ignition 2.21.0 Sep 9 00:35:37.115557 ignition[861]: Stage: kargs Sep 9 00:35:37.115862 ignition[861]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:37.115888 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:37.116942 ignition[861]: kargs: kargs passed Sep 9 00:35:37.116998 ignition[861]: Ignition finished successfully Sep 9 00:35:37.124477 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:35:37.126929 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 00:35:37.163108 ignition[870]: Ignition 2.21.0 Sep 9 00:35:37.163123 ignition[870]: Stage: disks Sep 9 00:35:37.163286 ignition[870]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:37.163298 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:37.168401 ignition[870]: disks: disks passed Sep 9 00:35:37.168501 ignition[870]: Ignition finished successfully Sep 9 00:35:37.172077 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:35:37.174169 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:35:37.174296 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:35:37.176402 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:35:37.178628 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:35:37.180506 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:35:37.184062 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:35:37.210664 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 00:35:37.218992 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:35:37.222001 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:35:37.336708 kernel: EXT4-fs (vda9): mounted filesystem 4b59fff7-9272-4156-91f8-37989d927dc6 r/w with ordered data mode. Quota mode: none. Sep 9 00:35:37.337430 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:35:37.338178 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:35:37.341749 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:35:37.344000 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:35:37.344432 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 9 00:35:37.344488 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:35:37.344519 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:35:37.371781 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:35:37.373806 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:35:37.381272 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889) Sep 9 00:35:37.381311 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:35:37.381327 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:35:37.386680 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:35:37.386751 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:35:37.389413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:35:37.423210 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:35:37.431463 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:35:37.436004 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:35:37.441423 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:35:37.540062 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:35:37.541257 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:35:37.543872 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:35:37.563665 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:35:37.578154 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 9 00:35:37.595653 ignition[1003]: INFO : Ignition 2.21.0 Sep 9 00:35:37.595653 ignition[1003]: INFO : Stage: mount Sep 9 00:35:37.597621 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:37.597621 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:37.600900 ignition[1003]: INFO : mount: mount passed Sep 9 00:35:37.601778 ignition[1003]: INFO : Ignition finished successfully Sep 9 00:35:37.605084 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:35:37.607203 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:35:37.763666 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:35:37.765331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:35:37.785692 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Sep 9 00:35:37.785781 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:35:37.787520 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:35:37.790891 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:35:37.790913 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:35:37.792705 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 00:35:37.826475 ignition[1032]: INFO : Ignition 2.21.0 Sep 9 00:35:37.826475 ignition[1032]: INFO : Stage: files Sep 9 00:35:37.828667 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:37.828667 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:37.832417 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:35:37.833705 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:35:37.833705 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:35:37.838354 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:35:37.840110 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:35:37.840110 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:35:37.839223 unknown[1032]: wrote ssh authorized keys file for user: core Sep 9 00:35:37.844211 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 00:35:37.844211 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 9 00:35:37.895138 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:35:38.087609 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 00:35:38.089985 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:35:38.089985 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:35:38.089985 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:35:38.089985 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:35:38.089985 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:35:38.089985 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:35:38.089985 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:35:38.089985 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:35:38.103979 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:35:38.103979 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:35:38.103979 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 00:35:38.103979 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 00:35:38.103979 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 00:35:38.103979 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 9 00:35:38.671653 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 00:35:38.703920 systemd-networkd[854]: eth0: Gained IPv6LL Sep 9 00:35:39.359225 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 00:35:39.359225 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 00:35:39.372583 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:35:39.701069 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:35:39.701069 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 00:35:39.701069 ignition[1032]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 9 00:35:39.701069 ignition[1032]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:35:39.708235 ignition[1032]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:35:39.708235 ignition[1032]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 9 00:35:39.708235 ignition[1032]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:35:39.724078 ignition[1032]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:35:39.728779 ignition[1032]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:35:39.730383 ignition[1032]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:35:39.730383 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:35:39.730383 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:35:39.730383 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:35:39.730383 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:35:39.730383 ignition[1032]: INFO : files: files passed Sep 9 00:35:39.730383 ignition[1032]: INFO : Ignition finished successfully Sep 9 00:35:39.737034 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:35:39.741671 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:35:39.745103 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:35:39.762533 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:35:39.825848 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:35:39.825848 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:35:39.770429 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:35:39.833242 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:35:39.827874 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:35:39.833416 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:35:39.835602 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:35:39.835858 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:35:39.981441 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:35:39.981592 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:35:39.982816 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:35:39.986950 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:35:39.987109 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:35:39.990683 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:35:40.017577 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:35:40.047118 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:35:40.072455 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:35:40.072735 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:35:40.075961 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:35:40.077897 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:35:40.078061 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:35:40.080770 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:35:40.081094 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:35:40.081414 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:35:40.081754 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:35:40.087891 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Sep 9 00:35:40.089131 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:35:40.089447 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:35:40.089931 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:35:40.090254 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:35:40.090574 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:35:40.091040 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:35:40.091318 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:35:40.091464 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:35:40.103097 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:35:40.105182 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:35:40.106342 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:35:40.108211 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:35:40.108497 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:35:40.108660 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:35:40.170288 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:35:40.171009 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:35:40.172706 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:35:40.173833 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:35:40.178778 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:35:40.181591 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:35:40.181916 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 9 00:35:40.182246 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:35:40.182351 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:35:40.185838 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:35:40.185919 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:35:40.186777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:35:40.186906 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:35:40.188516 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:35:40.188629 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:35:40.191423 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:35:40.192116 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:35:40.192234 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:35:40.196091 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:35:40.198083 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:35:40.199967 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:35:40.204142 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:35:40.205274 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:35:40.213354 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:35:40.214333 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:35:40.232215 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 9 00:35:40.271149 ignition[1088]: INFO : Ignition 2.21.0 Sep 9 00:35:40.271149 ignition[1088]: INFO : Stage: umount Sep 9 00:35:40.271149 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:40.271149 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:40.271149 ignition[1088]: INFO : umount: umount passed Sep 9 00:35:40.271149 ignition[1088]: INFO : Ignition finished successfully Sep 9 00:35:40.272007 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:35:40.272160 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:35:40.274148 systemd[1]: Stopped target network.target - Network. Sep 9 00:35:40.275864 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:35:40.275965 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:35:40.279207 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:35:40.279275 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:35:40.280297 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:35:40.280366 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:35:40.280717 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:35:40.280772 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:35:40.281440 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:35:40.286013 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:35:40.293074 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:35:40.293246 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:35:40.299569 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 00:35:40.299948 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 9 00:35:40.300107 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:35:40.305620 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 00:35:40.306866 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 00:35:40.309311 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:35:40.309380 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:35:40.313840 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:35:40.314891 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:35:40.314965 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:35:40.361900 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:35:40.361955 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:35:40.365273 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:35:40.365342 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:35:40.366230 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:35:40.366295 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:35:40.370377 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:35:40.372921 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:35:40.373018 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:35:40.396949 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:35:40.397095 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:35:40.398398 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 9 00:35:40.398631 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:35:40.402032 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:35:40.402107 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:35:40.403618 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:35:40.403692 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:35:40.406056 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:35:40.406121 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:35:40.407562 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:35:40.407627 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:35:40.408445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:35:40.408508 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:35:40.415964 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:35:40.417256 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 00:35:40.417320 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:35:40.420710 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:35:40.420779 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:35:40.423435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:35:40.423501 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:40.428286 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Sep 9 00:35:40.428361 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 00:35:40.428428 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:35:40.435973 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:35:40.436089 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:35:40.713228 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:35:40.713438 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:35:40.716005 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:35:40.717968 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:35:40.718089 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:35:40.723112 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:35:40.753044 systemd[1]: Switching root. Sep 9 00:35:40.786461 systemd-journald[220]: Journal stopped Sep 9 00:35:42.084304 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
Sep 9 00:35:42.084384 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:35:42.084399 kernel: SELinux: policy capability open_perms=1 Sep 9 00:35:42.084411 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:35:42.084422 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:35:42.084435 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:35:42.084447 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:35:42.084458 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:35:42.084483 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:35:42.084495 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 00:35:42.084507 kernel: audit: type=1403 audit(1757378141.213:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:35:42.084520 systemd[1]: Successfully loaded SELinux policy in 50.215ms. Sep 9 00:35:42.084546 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.826ms. Sep 9 00:35:42.084560 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:35:42.084575 systemd[1]: Detected virtualization kvm. Sep 9 00:35:42.084587 systemd[1]: Detected architecture x86-64. Sep 9 00:35:42.084599 systemd[1]: Detected first boot. Sep 9 00:35:42.084618 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:35:42.084630 zram_generator::config[1133]: No configuration found. 
Sep 9 00:35:42.084657 kernel: Guest personality initialized and is inactive Sep 9 00:35:42.084674 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 00:35:42.084685 kernel: Initialized host personality Sep 9 00:35:42.084699 kernel: NET: Registered PF_VSOCK protocol family Sep 9 00:35:42.084710 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:35:42.084723 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 00:35:42.084735 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:35:42.084747 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:35:42.084760 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:35:42.084772 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:35:42.084787 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:35:42.084799 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:35:42.084814 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:35:42.084833 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:35:42.084849 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:35:42.084865 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:35:42.084880 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:35:42.084895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:35:42.084910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:35:42.084932 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Sep 9 00:35:42.084947 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:35:42.084966 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:35:42.084981 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:35:42.084994 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:35:42.085005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:35:42.085027 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:35:42.085042 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:35:42.085058 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:35:42.085077 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:35:42.085095 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:35:42.085110 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:35:42.085125 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:35:42.085138 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:35:42.085150 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:35:42.085162 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:35:42.085176 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:35:42.085188 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 00:35:42.085202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:35:42.085215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:35:42.085227 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 9 00:35:42.085239 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:35:42.085251 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:35:42.085263 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:35:42.085276 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:35:42.085288 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:35:42.085300 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:35:42.085315 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:35:42.085327 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:35:42.085340 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:35:42.085352 systemd[1]: Reached target machines.target - Containers. Sep 9 00:35:42.085365 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:35:42.085377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:35:42.085389 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:35:42.085402 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:35:42.085414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:35:42.085428 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:35:42.085440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:35:42.085452 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Sep 9 00:35:42.085465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:35:42.085486 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:35:42.085498 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:35:42.085511 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:35:42.085523 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:35:42.085537 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:35:42.085551 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:35:42.085563 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:35:42.085575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:35:42.085587 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:35:42.085599 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:35:42.085611 kernel: loop: module loaded Sep 9 00:35:42.085626 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 00:35:42.085653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:35:42.085666 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:35:42.085678 systemd[1]: Stopped verity-setup.service. Sep 9 00:35:42.085690 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:35:42.085706 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 9 00:35:42.085719 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:35:42.085731 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:35:42.085743 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:35:42.085755 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:35:42.085767 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:35:42.085779 kernel: fuse: init (API version 7.41) Sep 9 00:35:42.085793 kernel: ACPI: bus type drm_connector registered Sep 9 00:35:42.085905 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:35:42.085917 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:35:42.085929 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:35:42.085941 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:35:42.085976 systemd-journald[1204]: Collecting audit messages is disabled. Sep 9 00:35:42.086002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:35:42.086014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:35:42.086029 systemd-journald[1204]: Journal started Sep 9 00:35:42.086051 systemd-journald[1204]: Runtime Journal (/run/log/journal/a56674db579c451b8e688c7b0cadeec8) is 6M, max 48.5M, 42.4M free. Sep 9 00:35:41.809964 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:35:41.836663 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:35:41.837241 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:35:42.088676 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:35:42.090587 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:35:42.090859 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 9 00:35:42.092387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:42.092683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:35:42.094190 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:35:42.094418 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:35:42.095794 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:42.096014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:35:42.097511 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:35:42.099038 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:35:42.100777 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:35:42.102441 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 00:35:42.118549 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:35:42.121153 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:35:42.123327 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:35:42.124514 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:35:42.124542 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:35:42.126528 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 00:35:42.129798 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:35:42.131296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:35:42.136081 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 9 00:35:42.140615 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:35:42.141957 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:35:42.144267 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:35:42.145503 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:35:42.148768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:35:42.151830 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:35:42.156385 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:35:42.159352 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:35:42.195190 systemd-journald[1204]: Time spent on flushing to /var/log/journal/a56674db579c451b8e688c7b0cadeec8 is 18.105ms for 1073 entries. Sep 9 00:35:42.195190 systemd-journald[1204]: System Journal (/var/log/journal/a56674db579c451b8e688c7b0cadeec8) is 8M, max 195.6M, 187.6M free. Sep 9 00:35:42.233830 systemd-journald[1204]: Received client request to flush runtime journal. Sep 9 00:35:42.233882 kernel: loop0: detected capacity change from 0 to 146240 Sep 9 00:35:42.160874 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:35:42.199353 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:35:42.202326 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:35:42.206518 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:35:42.212952 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Sep 9 00:35:42.225747 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:35:42.235949 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:35:42.247666 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:35:42.252652 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 00:35:42.266895 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:35:42.270079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:35:42.273662 kernel: loop1: detected capacity change from 0 to 221472 Sep 9 00:35:42.300860 kernel: loop2: detected capacity change from 0 to 113872 Sep 9 00:35:42.316299 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 9 00:35:42.316322 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 9 00:35:42.367261 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:35:42.391803 kernel: loop3: detected capacity change from 0 to 146240 Sep 9 00:35:42.407676 kernel: loop4: detected capacity change from 0 to 221472 Sep 9 00:35:42.418667 kernel: loop5: detected capacity change from 0 to 113872 Sep 9 00:35:42.429772 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:35:42.430427 (sd-merge)[1274]: Merged extensions into '/usr'. Sep 9 00:35:42.464447 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:35:42.464475 systemd[1]: Reloading... Sep 9 00:35:42.584677 zram_generator::config[1300]: No configuration found. Sep 9 00:35:42.703210 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 9 00:35:42.712986 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:35:42.787107 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:35:42.787592 systemd[1]: Reloading finished in 322 ms. Sep 9 00:35:42.821422 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:35:42.840678 systemd[1]: Starting ensure-sysext.service... Sep 9 00:35:42.851238 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:35:42.898033 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:35:42.902580 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:35:42.902627 systemd[1]: Reloading... Sep 9 00:35:42.937229 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 00:35:42.939079 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 00:35:42.939511 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:35:42.939880 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:35:42.940877 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:35:42.941227 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 9 00:35:42.941365 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Sep 9 00:35:42.949206 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:35:42.949291 systemd-tmpfiles[1337]: Skipping /boot Sep 9 00:35:42.967376 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 9 00:35:42.967520 systemd-tmpfiles[1337]: Skipping /boot Sep 9 00:35:42.975732 zram_generator::config[1368]: No configuration found. Sep 9 00:35:43.160737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:35:43.263699 systemd[1]: Reloading finished in 360 ms. Sep 9 00:35:43.282589 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:35:43.305385 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:35:43.316354 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:35:43.319215 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:35:43.321844 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:35:43.338563 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:35:43.344956 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:35:43.351709 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:35:43.355934 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:35:43.356138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:35:43.357724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:35:43.400725 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:35:43.405488 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 9 00:35:43.406845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:35:43.406965 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:35:43.409980 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:35:43.412451 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:35:43.414666 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:35:43.435076 systemd-udevd[1409]: Using default interface naming scheme 'v255'. Sep 9 00:35:43.439935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:35:43.478206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:35:43.480975 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:35:43.482949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:43.483223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:35:43.485023 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:43.485241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:35:43.497879 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:35:43.504126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:35:43.504362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 9 00:35:43.505857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:35:43.509443 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:35:43.512087 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:35:43.521874 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:35:43.523855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:35:43.523975 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:35:43.526176 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:35:43.527251 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:35:43.527348 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:35:43.528516 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:35:43.530394 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:35:43.530680 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:35:43.536522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:35:43.558767 augenrules[1470]: No rules Sep 9 00:35:43.542971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:35:43.544627 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:35:43.569105 systemd[1]: Finished ensure-sysext.service. 
Sep 9 00:35:43.570479 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:35:43.570790 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:35:43.572355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:43.572675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:35:43.576236 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:43.576519 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:35:43.580970 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:35:43.595972 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:35:43.597753 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:35:43.597827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:35:43.599827 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:35:43.676533 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 00:35:43.886753 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:35:43.909667 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 00:35:43.915114 systemd-resolved[1407]: Positive Trust Anchors: Sep 9 00:35:43.915137 systemd-resolved[1407]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:35:43.915171 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:35:43.916680 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:35:43.917676 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 00:35:43.917963 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 00:35:43.919284 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 00:35:43.924939 systemd-resolved[1407]: Defaulting to hostname 'linux'. Sep 9 00:35:43.927973 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:35:43.929229 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:35:43.970107 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:35:43.976912 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:35:43.985597 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:35:44.008734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:35:44.009394 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:44.015708 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 00:35:44.052928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:35:44.055409 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:35:44.055699 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:35:44.076341 systemd-networkd[1488]: lo: Link UP Sep 9 00:35:44.077041 systemd-networkd[1488]: lo: Gained carrier Sep 9 00:35:44.087925 systemd-networkd[1488]: Enumeration completed Sep 9 00:35:44.088435 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:35:44.088439 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:35:44.088616 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:35:44.090101 systemd-networkd[1488]: eth0: Link UP Sep 9 00:35:44.090191 systemd[1]: Reached target network.target - Network. Sep 9 00:35:44.090282 systemd-networkd[1488]: eth0: Gained carrier Sep 9 00:35:44.090297 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:35:44.094591 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 00:35:44.100168 kernel: kvm_amd: TSC scaling supported Sep 9 00:35:44.100208 kernel: kvm_amd: Nested Virtualization enabled Sep 9 00:35:44.100238 kernel: kvm_amd: Nested Paging enabled Sep 9 00:35:44.100921 kernel: kvm_amd: LBR virtualization supported Sep 9 00:35:44.101984 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Sep 9 00:35:44.105488 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 00:35:44.105660 kernel: kvm_amd: Virtual GIF supported Sep 9 00:35:44.124731 systemd-networkd[1488]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:35:44.126877 systemd-timesyncd[1489]: Network configuration changed, trying to establish connection. Sep 9 00:35:44.995558 systemd-timesyncd[1489]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:35:44.995684 systemd-timesyncd[1489]: Initial clock synchronization to Tue 2025-09-09 00:35:44.995382 UTC. Sep 9 00:35:44.997978 systemd-resolved[1407]: Clock change detected. Flushing caches. Sep 9 00:35:45.007902 kernel: EDAC MC: Ver: 3.0.0 Sep 9 00:35:45.008145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:35:45.010211 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 00:35:45.012339 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:35:45.013576 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:35:45.014842 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:35:45.016099 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 00:35:45.017436 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:35:45.018627 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:35:45.020044 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:35:45.021419 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:35:45.021452 systemd[1]: Reached target paths.target - Path Units. 
Sep 9 00:35:45.022525 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:35:45.024903 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:35:45.027452 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:35:45.031230 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 00:35:45.032644 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 00:35:45.034019 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 00:35:45.039350 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:35:45.040942 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 00:35:45.042774 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:35:45.044544 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:35:45.045509 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:35:45.046476 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:35:45.046505 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:35:45.047593 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:35:45.049791 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:35:45.052022 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:35:45.055235 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:35:45.057314 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 9 00:35:45.058332 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:35:45.060508 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 00:35:45.077460 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:35:45.080459 jq[1539]: false Sep 9 00:35:45.080599 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:35:45.083276 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:35:45.085899 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing passwd entry cache Sep 9 00:35:45.085906 oslogin_cache_refresh[1541]: Refreshing passwd entry cache Sep 9 00:35:45.087125 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:35:45.091166 extend-filesystems[1540]: Found /dev/vda6 Sep 9 00:35:45.092545 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:35:45.096741 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting users, quitting Sep 9 00:35:45.096741 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:35:45.096741 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing group entry cache Sep 9 00:35:45.096394 oslogin_cache_refresh[1541]: Failure getting users, quitting Sep 9 00:35:45.096421 oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:35:45.096472 oslogin_cache_refresh[1541]: Refreshing group entry cache Sep 9 00:35:45.097968 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 9 00:35:45.098472 extend-filesystems[1540]: Found /dev/vda9 Sep 9 00:35:45.098495 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:35:45.100061 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:35:45.100822 extend-filesystems[1540]: Checking size of /dev/vda9 Sep 9 00:35:45.103256 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:35:45.108274 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:35:45.109966 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting groups, quitting Sep 9 00:35:45.109966 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:35:45.108446 oslogin_cache_refresh[1541]: Failure getting groups, quitting Sep 9 00:35:45.108459 oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:35:45.110153 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:35:45.110996 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:35:45.112257 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 00:35:45.112983 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 00:35:45.116953 extend-filesystems[1540]: Resized partition /dev/vda9 Sep 9 00:35:45.116756 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:35:45.122497 jq[1557]: true Sep 9 00:35:45.122669 extend-filesystems[1563]: resize2fs 1.47.2 (1-Jan-2025) Sep 9 00:35:45.125035 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:35:45.117996 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 9 00:35:45.135339 jq[1565]: true Sep 9 00:35:45.154928 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:35:45.160900 tar[1562]: linux-amd64/helm Sep 9 00:35:45.164040 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:35:45.184231 update_engine[1554]: I20250909 00:35:45.166342 1554 main.cc:92] Flatcar Update Engine starting Sep 9 00:35:45.184584 extend-filesystems[1563]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:35:45.184584 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:35:45.184584 extend-filesystems[1563]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:35:45.175529 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:35:45.195048 extend-filesystems[1540]: Resized filesystem in /dev/vda9 Sep 9 00:35:45.175946 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:35:45.183618 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:35:45.184828 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:35:45.216486 bash[1599]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:35:45.222640 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:35:45.225450 dbus-daemon[1537]: [system] SELinux support is enabled Sep 9 00:35:45.243067 update_engine[1554]: I20250909 00:35:45.239634 1554 update_check_scheduler.cc:74] Next update check in 11m58s Sep 9 00:35:45.243448 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:35:45.250836 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 9 00:35:45.252005 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:35:45.252041 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:35:45.253401 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:35:45.253425 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:35:45.258742 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:35:45.263290 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:35:45.265822 systemd-logind[1550]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 00:35:45.265864 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:35:45.267236 systemd-logind[1550]: New seat seat0. Sep 9 00:35:45.274552 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:35:45.364248 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:35:45.365368 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:35:45.397008 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:35:45.401045 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:35:45.427414 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:35:45.427779 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:35:45.441848 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 9 00:35:45.540200 containerd[1566]: time="2025-09-09T00:35:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 00:35:45.541521 containerd[1566]: time="2025-09-09T00:35:45.541427962Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 9 00:35:45.561087 containerd[1566]: time="2025-09-09T00:35:45.560990990Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.36µs"
Sep 9 00:35:45.561087 containerd[1566]: time="2025-09-09T00:35:45.561062233Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 00:35:45.561087 containerd[1566]: time="2025-09-09T00:35:45.561093963Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 00:35:45.561587 containerd[1566]: time="2025-09-09T00:35:45.561546692Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 00:35:45.561623 containerd[1566]: time="2025-09-09T00:35:45.561585765Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 00:35:45.561666 containerd[1566]: time="2025-09-09T00:35:45.561641970Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:35:45.561826 containerd[1566]: time="2025-09-09T00:35:45.561795088Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:35:45.561826 containerd[1566]: time="2025-09-09T00:35:45.561814223Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:35:45.562351 containerd[1566]: time="2025-09-09T00:35:45.562291448Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:35:45.562351 containerd[1566]: time="2025-09-09T00:35:45.562332636Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:35:45.562425 containerd[1566]: time="2025-09-09T00:35:45.562367321Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:35:45.562425 containerd[1566]: time="2025-09-09T00:35:45.562385144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 00:35:45.562593 containerd[1566]: time="2025-09-09T00:35:45.562561455Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 00:35:45.562899 containerd[1566]: time="2025-09-09T00:35:45.562854124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:35:45.562925 containerd[1566]: time="2025-09-09T00:35:45.562909818Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:35:45.562925 containerd[1566]: time="2025-09-09T00:35:45.562922251Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 00:35:45.563002 containerd[1566]: time="2025-09-09T00:35:45.562976703Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 00:35:45.564685 containerd[1566]: time="2025-09-09T00:35:45.564621017Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 00:35:45.564796 containerd[1566]: time="2025-09-09T00:35:45.564778262Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:35:45.574491 containerd[1566]: time="2025-09-09T00:35:45.574429451Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 00:35:45.574554 containerd[1566]: time="2025-09-09T00:35:45.574521424Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 00:35:45.574554 containerd[1566]: time="2025-09-09T00:35:45.574542664Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 00:35:45.574621 containerd[1566]: time="2025-09-09T00:35:45.574559085Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 00:35:45.574621 containerd[1566]: time="2025-09-09T00:35:45.574579493Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 00:35:45.574621 containerd[1566]: time="2025-09-09T00:35:45.574594892Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 00:35:45.574621 containerd[1566]: time="2025-09-09T00:35:45.574607666Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 00:35:45.574621 containerd[1566]: time="2025-09-09T00:35:45.574620189Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 00:35:45.574769 containerd[1566]: time="2025-09-09T00:35:45.574643924Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 00:35:45.574769 containerd[1566]: time="2025-09-09T00:35:45.574678468Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 00:35:45.574769 containerd[1566]: time="2025-09-09T00:35:45.574690010Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 00:35:45.574769 containerd[1566]: time="2025-09-09T00:35:45.574703235Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 00:35:45.574960 containerd[1566]: time="2025-09-09T00:35:45.574921364Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 00:35:45.574993 containerd[1566]: time="2025-09-09T00:35:45.574972460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 00:35:45.574993 containerd[1566]: time="2025-09-09T00:35:45.574989642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 00:35:45.575045 containerd[1566]: time="2025-09-09T00:35:45.575004810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 00:35:45.575045 containerd[1566]: time="2025-09-09T00:35:45.575018817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 00:35:45.575082 containerd[1566]: time="2025-09-09T00:35:45.575062629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 00:35:45.575082 containerd[1566]: time="2025-09-09T00:35:45.575078529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 00:35:45.575133 containerd[1566]: time="2025-09-09T00:35:45.575091663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 00:35:45.575133 containerd[1566]: time="2025-09-09T00:35:45.575111260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 00:35:45.575177 containerd[1566]: time="2025-09-09T00:35:45.575140785Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 00:35:45.575177 containerd[1566]: time="2025-09-09T00:35:45.575153690Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 00:35:45.575278 containerd[1566]: time="2025-09-09T00:35:45.575252255Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 00:35:45.575302 containerd[1566]: time="2025-09-09T00:35:45.575282060Z" level=info msg="Start snapshots syncer"
Sep 9 00:35:45.575329 containerd[1566]: time="2025-09-09T00:35:45.575320943Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 00:35:45.575680 containerd[1566]: time="2025-09-09T00:35:45.575612190Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 00:35:45.575977 containerd[1566]: time="2025-09-09T00:35:45.575684906Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 00:35:45.575977 containerd[1566]: time="2025-09-09T00:35:45.575771178Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 00:35:45.575977 containerd[1566]: time="2025-09-09T00:35:45.575926759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 00:35:45.576159 containerd[1566]: time="2025-09-09T00:35:45.576124951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 00:35:45.576159 containerd[1566]: time="2025-09-09T00:35:45.576148095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 00:35:45.576159 containerd[1566]: time="2025-09-09T00:35:45.576159115Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 00:35:45.576237 containerd[1566]: time="2025-09-09T00:35:45.576172651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 00:35:45.576237 containerd[1566]: time="2025-09-09T00:35:45.576187749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 00:35:45.576237 containerd[1566]: time="2025-09-09T00:35:45.576208868Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 00:35:45.576845 containerd[1566]: time="2025-09-09T00:35:45.576808663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 00:35:45.576896 containerd[1566]: time="2025-09-09T00:35:45.576842918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 00:35:45.576896 containerd[1566]: time="2025-09-09T00:35:45.576865760Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 00:35:45.576975 containerd[1566]: time="2025-09-09T00:35:45.576952443Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 00:35:45.576997 containerd[1566]: time="2025-09-09T00:35:45.576976518Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 00:35:45.576997 containerd[1566]: time="2025-09-09T00:35:45.576991186Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 00:35:45.577035 containerd[1566]: time="2025-09-09T00:35:45.577004721Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 00:35:45.577035 containerd[1566]: time="2025-09-09T00:35:45.577016974Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 00:35:45.577084 containerd[1566]: time="2025-09-09T00:35:45.577048463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 00:35:45.577084 containerd[1566]: time="2025-09-09T00:35:45.577075213Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 00:35:45.577153 containerd[1566]: time="2025-09-09T00:35:45.577131148Z" level=info msg="runtime interface created"
Sep 9 00:35:45.577153 containerd[1566]: time="2025-09-09T00:35:45.577147409Z" level=info msg="created NRI interface"
Sep 9 00:35:45.577196 containerd[1566]: time="2025-09-09T00:35:45.577174529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 00:35:45.577229 containerd[1566]: time="2025-09-09T00:35:45.577192784Z" level=info msg="Connect containerd service"
Sep 9 00:35:45.577296 containerd[1566]: time="2025-09-09T00:35:45.577275349Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 00:35:45.578668 containerd[1566]: time="2025-09-09T00:35:45.578617295Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:35:45.682618 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 00:35:45.687032 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 00:35:45.689780 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 00:35:45.728081 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858039341Z" level=info msg="Start subscribing containerd event"
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858176478Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858245448Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858287507Z" level=info msg="Start recovering state"
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858521666Z" level=info msg="Start event monitor"
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858571239Z" level=info msg="Start cni network conf syncer for default"
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858583031Z" level=info msg="Start streaming server"
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858616744Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858639026Z" level=info msg="runtime interface starting up..."
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858649566Z" level=info msg="starting plugins..."
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.858672970Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 00:35:45.859083 containerd[1566]: time="2025-09-09T00:35:45.859042282Z" level=info msg="containerd successfully booted in 0.319637s"
Sep 9 00:35:45.859260 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 00:35:46.073422 tar[1562]: linux-amd64/LICENSE
Sep 9 00:35:46.073422 tar[1562]: linux-amd64/README.md
Sep 9 00:35:46.100975 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 00:35:46.547224 systemd-networkd[1488]: eth0: Gained IPv6LL
Sep 9 00:35:46.551111 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 00:35:46.552961 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 00:35:46.555671 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 00:35:46.558355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
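The "no network config found in /etc/cni/net.d" error logged above is expected on a first boot: containerd's CRI plugin loaded before any CNI plugin dropped a config into the directory named by `confDir` in the cri config. For illustration only, a minimal bridge config of the kind that would satisfy the loader might look like the following (the file name `10-mynet.conflist`, the network name, and the 10.88.0.0/16 subnet are hypothetical, not taken from this host; a real cluster's network addon writes its own):

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Once such a file exists under /etc/cni/net.d, the "Start cni network conf syncer for default" loop shown above picks it up without a containerd restart.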
Sep 9 00:35:46.560869 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 00:35:46.597186 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 00:35:46.611624 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 00:35:46.611987 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 00:35:46.613675 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 00:35:47.787226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:35:47.788992 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 00:35:47.790951 systemd[1]: Startup finished in 3.748s (kernel) + 7.576s (initrd) + 5.758s (userspace) = 17.084s.
Sep 9 00:35:47.802308 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:35:48.259285 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 00:35:48.261163 systemd[1]: Started sshd@0-10.0.0.5:22-10.0.0.1:35164.service - OpenSSH per-connection server daemon (10.0.0.1:35164).
Sep 9 00:35:48.357828 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 35164 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:35:48.361397 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:35:48.372557 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 00:35:48.374395 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 00:35:48.386046 systemd-logind[1550]: New session 1 of user core.
Sep 9 00:35:48.410984 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 00:35:48.415807 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 00:35:48.535663 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:35:48.538779 kubelet[1671]: E0909 00:35:48.538660 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:35:48.539691 systemd-logind[1550]: New session c1 of user core.
Sep 9 00:35:48.543066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:35:48.543260 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:35:48.543632 systemd[1]: kubelet.service: Consumed 1.759s CPU time, 267.5M memory peak.
Sep 9 00:35:48.710601 systemd[1687]: Queued start job for default target default.target.
Sep 9 00:35:48.727259 systemd[1687]: Created slice app.slice - User Application Slice.
Sep 9 00:35:48.727286 systemd[1687]: Reached target paths.target - Paths.
Sep 9 00:35:48.727326 systemd[1687]: Reached target timers.target - Timers.
Sep 9 00:35:48.729018 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 00:35:48.743513 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 00:35:48.743686 systemd[1687]: Reached target sockets.target - Sockets.
Sep 9 00:35:48.743739 systemd[1687]: Reached target basic.target - Basic System.
Sep 9 00:35:48.743781 systemd[1687]: Reached target default.target - Main User Target.
Sep 9 00:35:48.743817 systemd[1687]: Startup finished in 195ms.
Sep 9 00:35:48.744404 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 00:35:48.758018 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 00:35:48.820705 systemd[1]: Started sshd@1-10.0.0.5:22-10.0.0.1:35178.service - OpenSSH per-connection server daemon (10.0.0.1:35178).
Sep 9 00:35:48.881145 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 35178 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:35:48.882824 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:35:48.887601 systemd-logind[1550]: New session 2 of user core.
Sep 9 00:35:48.897010 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 00:35:48.951677 sshd[1701]: Connection closed by 10.0.0.1 port 35178
Sep 9 00:35:48.952079 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Sep 9 00:35:48.965483 systemd[1]: sshd@1-10.0.0.5:22-10.0.0.1:35178.service: Deactivated successfully.
Sep 9 00:35:48.967316 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 00:35:48.968217 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit.
Sep 9 00:35:48.971770 systemd[1]: Started sshd@2-10.0.0.5:22-10.0.0.1:35184.service - OpenSSH per-connection server daemon (10.0.0.1:35184).
Sep 9 00:35:48.972434 systemd-logind[1550]: Removed session 2.
Sep 9 00:35:49.036914 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 35184 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:35:49.038370 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:35:49.043668 systemd-logind[1550]: New session 3 of user core.
Sep 9 00:35:49.062032 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 00:35:49.112569 sshd[1709]: Connection closed by 10.0.0.1 port 35184
Sep 9 00:35:49.112734 sshd-session[1707]: pam_unix(sshd:session): session closed for user core
Sep 9 00:35:49.126656 systemd[1]: sshd@2-10.0.0.5:22-10.0.0.1:35184.service: Deactivated successfully.
Sep 9 00:35:49.128915 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 00:35:49.129758 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit.
Sep 9 00:35:49.133393 systemd[1]: Started sshd@3-10.0.0.5:22-10.0.0.1:35190.service - OpenSSH per-connection server daemon (10.0.0.1:35190).
Sep 9 00:35:49.134363 systemd-logind[1550]: Removed session 3.
Sep 9 00:35:49.195299 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 35190 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:35:49.196771 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:35:49.202671 systemd-logind[1550]: New session 4 of user core.
Sep 9 00:35:49.225042 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 00:35:49.283550 sshd[1717]: Connection closed by 10.0.0.1 port 35190
Sep 9 00:35:49.283994 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Sep 9 00:35:49.297592 systemd[1]: sshd@3-10.0.0.5:22-10.0.0.1:35190.service: Deactivated successfully.
Sep 9 00:35:49.300157 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 00:35:49.301148 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit.
Sep 9 00:35:49.304999 systemd[1]: Started sshd@4-10.0.0.5:22-10.0.0.1:35194.service - OpenSSH per-connection server daemon (10.0.0.1:35194).
Sep 9 00:35:49.305778 systemd-logind[1550]: Removed session 4.
Sep 9 00:35:49.367319 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 35194 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:35:49.369651 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:35:49.376534 systemd-logind[1550]: New session 5 of user core.
Sep 9 00:35:49.390254 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 00:35:49.455956 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 00:35:49.456421 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:35:49.475054 sudo[1726]: pam_unix(sudo:session): session closed for user root
Sep 9 00:35:49.477420 sshd[1725]: Connection closed by 10.0.0.1 port 35194
Sep 9 00:35:49.477831 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
Sep 9 00:35:49.488079 systemd[1]: sshd@4-10.0.0.5:22-10.0.0.1:35194.service: Deactivated successfully.
Sep 9 00:35:49.490429 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 00:35:49.491408 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit.
Sep 9 00:35:49.496183 systemd[1]: Started sshd@5-10.0.0.5:22-10.0.0.1:35208.service - OpenSSH per-connection server daemon (10.0.0.1:35208).
Sep 9 00:35:49.496968 systemd-logind[1550]: Removed session 5.
Sep 9 00:35:49.558915 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 35208 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:35:49.560920 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:35:49.566799 systemd-logind[1550]: New session 6 of user core.
Sep 9 00:35:49.578231 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 00:35:49.636967 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 00:35:49.637378 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:35:49.659715 sudo[1736]: pam_unix(sudo:session): session closed for user root
Sep 9 00:35:49.668031 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 00:35:49.668431 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:35:49.680208 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:35:49.742580 augenrules[1758]: No rules
Sep 9 00:35:49.745097 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:35:49.745491 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:35:49.746744 sudo[1735]: pam_unix(sudo:session): session closed for user root
Sep 9 00:35:49.748240 sshd[1734]: Connection closed by 10.0.0.1 port 35208
Sep 9 00:35:49.748493 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Sep 9 00:35:49.767107 systemd[1]: sshd@5-10.0.0.5:22-10.0.0.1:35208.service: Deactivated successfully.
Sep 9 00:35:49.769620 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 00:35:49.770621 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit.
Sep 9 00:35:49.774550 systemd[1]: Started sshd@6-10.0.0.5:22-10.0.0.1:35214.service - OpenSSH per-connection server daemon (10.0.0.1:35214).
Sep 9 00:35:49.775609 systemd-logind[1550]: Removed session 6.
Sep 9 00:35:49.828096 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 35214 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:35:49.829590 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:35:49.834855 systemd-logind[1550]: New session 7 of user core.
Sep 9 00:35:49.849101 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 00:35:49.905964 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 00:35:49.906359 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:35:50.716385 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 00:35:50.734500 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 00:35:51.324177 dockerd[1791]: time="2025-09-09T00:35:51.324073677Z" level=info msg="Starting up"
Sep 9 00:35:51.325189 dockerd[1791]: time="2025-09-09T00:35:51.325152520Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 00:35:53.241376 dockerd[1791]: time="2025-09-09T00:35:53.241292466Z" level=info msg="Loading containers: start."
Sep 9 00:35:53.567905 kernel: Initializing XFRM netlink socket
Sep 9 00:35:54.081395 systemd-networkd[1488]: docker0: Link UP
Sep 9 00:35:54.086271 dockerd[1791]: time="2025-09-09T00:35:54.086214918Z" level=info msg="Loading containers: done."
Sep 9 00:35:54.101658 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3269567991-merged.mount: Deactivated successfully.
Sep 9 00:35:54.103238 dockerd[1791]: time="2025-09-09T00:35:54.103180404Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:35:54.103313 dockerd[1791]: time="2025-09-09T00:35:54.103297574Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 9 00:35:54.103469 dockerd[1791]: time="2025-09-09T00:35:54.103446182Z" level=info msg="Initializing buildkit"
Sep 9 00:35:54.135297 dockerd[1791]: time="2025-09-09T00:35:54.135239075Z" level=info msg="Completed buildkit initialization"
Sep 9 00:35:54.143513 dockerd[1791]: time="2025-09-09T00:35:54.143452498Z" level=info msg="Daemon has completed initialization"
Sep 9 00:35:54.143662 dockerd[1791]: time="2025-09-09T00:35:54.143565941Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:35:54.143745 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 00:35:54.904194 containerd[1566]: time="2025-09-09T00:35:54.904144818Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 9 00:35:58.020749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011284600.mount: Deactivated successfully.
Sep 9 00:35:58.793765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:35:58.795504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:35:59.148719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:35:59.152785 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:35:59.370343 kubelet[2062]: E0909 00:35:59.370215 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:35:59.378175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:35:59.378642 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:35:59.379386 systemd[1]: kubelet.service: Consumed 289ms CPU time, 108.5M memory peak.
Sep 9 00:35:59.817192 containerd[1566]: time="2025-09-09T00:35:59.817090525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:35:59.846507 containerd[1566]: time="2025-09-09T00:35:59.846414999Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631"
Sep 9 00:35:59.861851 containerd[1566]: time="2025-09-09T00:35:59.861800873Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:35:59.881378 containerd[1566]: time="2025-09-09T00:35:59.881326600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:35:59.882395 containerd[1566]: time="2025-09-09T00:35:59.882352524Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 4.978160668s"
Sep 9 00:35:59.882395 containerd[1566]: time="2025-09-09T00:35:59.882387650Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 9 00:35:59.883088 containerd[1566]: time="2025-09-09T00:35:59.883051615Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 9 00:36:02.426290 containerd[1566]: time="2025-09-09T00:36:02.426225244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:36:02.512737 containerd[1566]: time="2025-09-09T00:36:02.512684045Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681"
Sep 9 00:36:02.644117 containerd[1566]: time="2025-09-09T00:36:02.644062827Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:36:02.657187 containerd[1566]: time="2025-09-09T00:36:02.657124422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:36:02.658259 containerd[1566]: time="2025-09-09T00:36:02.658195050Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 2.775112948s"
Sep 9 00:36:02.658307 containerd[1566]: time="2025-09-09T00:36:02.658261535Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 9 00:36:02.659003 containerd[1566]: time="2025-09-09T00:36:02.658938464Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 9 00:36:05.097779 containerd[1566]: time="2025-09-09T00:36:05.097693120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:36:05.316507 containerd[1566]: time="2025-09-09T00:36:05.316414520Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427"
Sep 9 00:36:06.518846 containerd[1566]: time="2025-09-09T00:36:06.518757362Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:36:06.784397 containerd[1566]: time="2025-09-09T00:36:06.784243824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:36:06.785329 containerd[1566]: time="2025-09-09T00:36:06.785297120Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 4.126316757s"
Sep 9 00:36:06.785371 containerd[1566]: time="2025-09-09T00:36:06.785329530Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 9 00:36:06.785763 containerd[1566]: time="2025-09-09T00:36:06.785738387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 9 00:36:08.775754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189110832.mount: Deactivated successfully.
Sep 9 00:36:09.629639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 00:36:09.633161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:36:11.043291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:36:11.059334 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:36:11.135308 kubelet[2099]: E0909 00:36:11.135224 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:36:11.139552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:36:11.139772 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:36:11.140246 systemd[1]: kubelet.service: Consumed 353ms CPU time, 110M memory peak.
Sep 9 00:36:11.541413 containerd[1566]: time="2025-09-09T00:36:11.541084659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:11.542232 containerd[1566]: time="2025-09-09T00:36:11.542023860Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 9 00:36:11.543866 containerd[1566]: time="2025-09-09T00:36:11.543794360Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:11.546543 containerd[1566]: time="2025-09-09T00:36:11.546489896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:11.547346 containerd[1566]: time="2025-09-09T00:36:11.547278034Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 4.761501154s" Sep 9 00:36:11.547433 containerd[1566]: time="2025-09-09T00:36:11.547360288Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 9 00:36:11.548579 containerd[1566]: time="2025-09-09T00:36:11.548547735Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:36:12.147718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224077480.mount: Deactivated successfully. 
Sep 9 00:36:14.101896 containerd[1566]: time="2025-09-09T00:36:14.101751994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:14.102795 containerd[1566]: time="2025-09-09T00:36:14.102742181Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 00:36:14.104821 containerd[1566]: time="2025-09-09T00:36:14.104735920Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:14.107885 containerd[1566]: time="2025-09-09T00:36:14.107820445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:14.108924 containerd[1566]: time="2025-09-09T00:36:14.108895561Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.560318551s" Sep 9 00:36:14.108976 containerd[1566]: time="2025-09-09T00:36:14.108927020Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 00:36:14.109502 containerd[1566]: time="2025-09-09T00:36:14.109466401Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:36:15.193028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430753384.mount: Deactivated successfully. 
Sep 9 00:36:15.416516 containerd[1566]: time="2025-09-09T00:36:15.416407973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:36:15.441713 containerd[1566]: time="2025-09-09T00:36:15.441628892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:36:15.456067 containerd[1566]: time="2025-09-09T00:36:15.455968043Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:36:15.480302 containerd[1566]: time="2025-09-09T00:36:15.480242366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:36:15.480851 containerd[1566]: time="2025-09-09T00:36:15.480819037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.371326116s" Sep 9 00:36:15.480924 containerd[1566]: time="2025-09-09T00:36:15.480853011Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:36:15.481550 containerd[1566]: time="2025-09-09T00:36:15.481366343Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 00:36:21.154983 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Sep 9 00:36:21.157259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:21.164162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1949887967.mount: Deactivated successfully. Sep 9 00:36:21.370829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:21.374855 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:36:21.514218 kubelet[2175]: E0909 00:36:21.514044 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:36:21.518604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:36:21.518860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:36:21.519287 systemd[1]: kubelet.service: Consumed 251ms CPU time, 111.1M memory peak. 
Sep 9 00:36:26.539012 containerd[1566]: time="2025-09-09T00:36:26.538907054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:26.716679 containerd[1566]: time="2025-09-09T00:36:26.716570968Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 9 00:36:26.845201 containerd[1566]: time="2025-09-09T00:36:26.845022810Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:26.977762 containerd[1566]: time="2025-09-09T00:36:26.977664036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:26.979150 containerd[1566]: time="2025-09-09T00:36:26.979079639Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 11.497683079s" Sep 9 00:36:26.979150 containerd[1566]: time="2025-09-09T00:36:26.979131107Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 00:36:29.381097 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:29.381266 systemd[1]: kubelet.service: Consumed 251ms CPU time, 111.1M memory peak. Sep 9 00:36:29.383618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:29.411698 systemd[1]: Reload requested from client PID 2263 ('systemctl') (unit session-7.scope)... 
Sep 9 00:36:29.411729 systemd[1]: Reloading... Sep 9 00:36:29.521987 zram_generator::config[2309]: No configuration found. Sep 9 00:36:30.227691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:36:30.349516 systemd[1]: Reloading finished in 937 ms. Sep 9 00:36:30.428664 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:36:30.428769 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:36:30.429133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:30.429173 systemd[1]: kubelet.service: Consumed 183ms CPU time, 98.2M memory peak. Sep 9 00:36:30.431011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:30.645435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:30.649492 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:36:30.744344 update_engine[1554]: I20250909 00:36:30.744210 1554 update_attempter.cc:509] Updating boot flags... Sep 9 00:36:31.827425 kubelet[2354]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:36:31.827425 kubelet[2354]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:36:31.827425 kubelet[2354]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:36:31.827840 kubelet[2354]: I0909 00:36:31.827501 2354 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:36:32.970641 kubelet[2354]: I0909 00:36:32.970593 2354 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:36:32.970641 kubelet[2354]: I0909 00:36:32.970628 2354 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:36:32.971228 kubelet[2354]: I0909 00:36:32.970945 2354 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:36:32.998150 kubelet[2354]: E0909 00:36:32.998099 2354 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:32.998762 kubelet[2354]: I0909 00:36:32.998729 2354 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:36:33.008263 kubelet[2354]: I0909 00:36:33.008159 2354 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:36:33.014810 kubelet[2354]: I0909 00:36:33.014773 2354 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:36:33.014965 kubelet[2354]: I0909 00:36:33.014949 2354 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:36:33.015136 kubelet[2354]: I0909 00:36:33.015105 2354 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:36:33.015318 kubelet[2354]: I0909 00:36:33.015135 2354 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 9 00:36:33.015451 kubelet[2354]: I0909 00:36:33.015343 2354 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:36:33.015451 kubelet[2354]: I0909 00:36:33.015353 2354 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:36:33.015517 kubelet[2354]: I0909 00:36:33.015502 2354 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:36:33.017471 kubelet[2354]: I0909 00:36:33.017445 2354 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:36:33.017471 kubelet[2354]: I0909 00:36:33.017469 2354 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:36:33.017548 kubelet[2354]: I0909 00:36:33.017502 2354 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:36:33.017548 kubelet[2354]: I0909 00:36:33.017533 2354 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:36:33.020523 kubelet[2354]: W0909 00:36:33.020465 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:33.020577 kubelet[2354]: E0909 00:36:33.020528 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:33.020604 kubelet[2354]: I0909 00:36:33.020589 2354 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:36:33.020808 kubelet[2354]: W0909 00:36:33.020758 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:33.020808 kubelet[2354]: E0909 00:36:33.020802 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:33.021019 kubelet[2354]: I0909 00:36:33.020985 2354 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:36:33.021090 kubelet[2354]: W0909 00:36:33.021064 2354 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:36:33.024545 kubelet[2354]: I0909 00:36:33.022990 2354 server.go:1274] "Started kubelet" Sep 9 00:36:33.024545 kubelet[2354]: I0909 00:36:33.023063 2354 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:36:33.024545 kubelet[2354]: I0909 00:36:33.024043 2354 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:36:33.024545 kubelet[2354]: I0909 00:36:33.024132 2354 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:36:33.025363 kubelet[2354]: I0909 00:36:33.025335 2354 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:36:33.025502 kubelet[2354]: I0909 00:36:33.025485 2354 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:36:33.028176 kubelet[2354]: E0909 00:36:33.028092 2354 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:36:33.028792 kubelet[2354]: I0909 00:36:33.028762 2354 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:36:33.028858 kubelet[2354]: E0909 00:36:33.027828 2354 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18637627d326d196 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:36:33.022964118 +0000 UTC m=+2.368857494,LastTimestamp:2025-09-09 00:36:33.022964118 +0000 UTC m=+2.368857494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:36:33.030935 kubelet[2354]: E0909 00:36:33.030252 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:36:33.030935 kubelet[2354]: I0909 00:36:33.030298 2354 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:36:33.030935 kubelet[2354]: I0909 00:36:33.030627 2354 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:36:33.030935 kubelet[2354]: I0909 00:36:33.030695 2354 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:36:33.031395 kubelet[2354]: W0909 00:36:33.031338 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.5:6443: connect: connection refused Sep 9 00:36:33.031448 kubelet[2354]: E0909 00:36:33.031401 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:33.031739 kubelet[2354]: E0909 00:36:33.031698 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="200ms" Sep 9 00:36:33.031954 kubelet[2354]: I0909 00:36:33.031925 2354 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:36:33.032040 kubelet[2354]: I0909 00:36:33.032014 2354 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:36:33.033263 kubelet[2354]: I0909 00:36:33.033236 2354 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:36:33.047374 kubelet[2354]: I0909 00:36:33.047309 2354 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:36:33.049416 kubelet[2354]: I0909 00:36:33.048762 2354 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:36:33.049416 kubelet[2354]: I0909 00:36:33.048808 2354 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:36:33.049416 kubelet[2354]: I0909 00:36:33.048935 2354 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:36:33.049416 kubelet[2354]: I0909 00:36:33.049065 2354 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:36:33.049416 kubelet[2354]: I0909 00:36:33.049078 2354 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:36:33.049416 kubelet[2354]: I0909 00:36:33.049100 2354 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:36:33.049416 kubelet[2354]: E0909 00:36:33.049108 2354 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:36:33.051646 kubelet[2354]: W0909 00:36:33.051591 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:33.051708 kubelet[2354]: E0909 00:36:33.051655 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:33.131309 kubelet[2354]: E0909 00:36:33.131258 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:36:33.149633 kubelet[2354]: E0909 00:36:33.149575 2354 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:36:33.232181 kubelet[2354]: E0909 00:36:33.231988 2354 kubelet_node_status.go:453] "Error getting 
the current node from lister" err="node \"localhost\" not found" Sep 9 00:36:33.232930 kubelet[2354]: E0909 00:36:33.232851 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="400ms" Sep 9 00:36:33.332160 kubelet[2354]: E0909 00:36:33.332118 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:36:33.350033 kubelet[2354]: E0909 00:36:33.349991 2354 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:36:33.414260 kubelet[2354]: I0909 00:36:33.414200 2354 policy_none.go:49] "None policy: Start" Sep 9 00:36:33.415175 kubelet[2354]: I0909 00:36:33.415153 2354 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:36:33.415222 kubelet[2354]: I0909 00:36:33.415193 2354 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:36:33.432618 kubelet[2354]: E0909 00:36:33.432578 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:36:33.533502 kubelet[2354]: E0909 00:36:33.533474 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:36:33.634037 kubelet[2354]: E0909 00:36:33.633958 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:36:33.634474 kubelet[2354]: E0909 00:36:33.634411 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="800ms" Sep 9 00:36:33.668938 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Sep 9 00:36:33.681911 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:36:33.702362 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:36:33.704319 kubelet[2354]: I0909 00:36:33.704129 2354 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:36:33.704488 kubelet[2354]: I0909 00:36:33.704462 2354 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:36:33.704532 kubelet[2354]: I0909 00:36:33.704485 2354 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:36:33.704972 kubelet[2354]: I0909 00:36:33.704939 2354 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:36:33.706301 kubelet[2354]: E0909 00:36:33.706275 2354 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:36:33.742104 kubelet[2354]: E0909 00:36:33.741977 2354 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18637627d326d196 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:36:33.022964118 +0000 UTC m=+2.368857494,LastTimestamp:2025-09-09 00:36:33.022964118 +0000 UTC m=+2.368857494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:36:33.761361 systemd[1]: Created 
slice kubepods-burstable-pod82c77e4e2c76d590eec5df7f06d27b10.slice - libcontainer container kubepods-burstable-pod82c77e4e2c76d590eec5df7f06d27b10.slice. Sep 9 00:36:33.777559 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 9 00:36:33.783029 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 9 00:36:33.806113 kubelet[2354]: I0909 00:36:33.806001 2354 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:36:33.806914 kubelet[2354]: E0909 00:36:33.806865 2354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Sep 9 00:36:33.835227 kubelet[2354]: I0909 00:36:33.835201 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82c77e4e2c76d590eec5df7f06d27b10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"82c77e4e2c76d590eec5df7f06d27b10\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:33.835278 kubelet[2354]: I0909 00:36:33.835226 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:33.835325 kubelet[2354]: I0909 00:36:33.835312 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:33.835456 kubelet[2354]: I0909 00:36:33.835409 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:33.835495 kubelet[2354]: I0909 00:36:33.835457 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:33.835517 kubelet[2354]: I0909 00:36:33.835500 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:33.835564 kubelet[2354]: I0909 00:36:33.835523 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82c77e4e2c76d590eec5df7f06d27b10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"82c77e4e2c76d590eec5df7f06d27b10\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:33.835587 kubelet[2354]: I0909 00:36:33.835568 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/82c77e4e2c76d590eec5df7f06d27b10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"82c77e4e2c76d590eec5df7f06d27b10\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:33.835612 kubelet[2354]: I0909 00:36:33.835590 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:33.836206 kubelet[2354]: W0909 00:36:33.836180 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:33.836274 kubelet[2354]: E0909 00:36:33.836215 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:34.008680 kubelet[2354]: I0909 00:36:34.008645 2354 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:36:34.009174 kubelet[2354]: E0909 00:36:34.009047 2354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Sep 9 00:36:34.056830 kubelet[2354]: W0909 00:36:34.056700 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:34.056830 
kubelet[2354]: E0909 00:36:34.056741 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:34.076253 kubelet[2354]: E0909 00:36:34.076211 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:34.076861 containerd[1566]: time="2025-09-09T00:36:34.076799633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:82c77e4e2c76d590eec5df7f06d27b10,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:34.081007 kubelet[2354]: E0909 00:36:34.080970 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:34.081444 containerd[1566]: time="2025-09-09T00:36:34.081387510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:34.085578 kubelet[2354]: E0909 00:36:34.085536 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:34.085859 containerd[1566]: time="2025-09-09T00:36:34.085806639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:34.162838 kubelet[2354]: W0909 00:36:34.162753 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:34.162838 kubelet[2354]: E0909 00:36:34.162808 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:34.411452 kubelet[2354]: I0909 00:36:34.411303 2354 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:36:34.411636 kubelet[2354]: E0909 00:36:34.411596 2354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Sep 9 00:36:34.435409 kubelet[2354]: E0909 00:36:34.435345 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="1.6s" Sep 9 00:36:34.460915 kubelet[2354]: W0909 00:36:34.460820 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:34.460960 kubelet[2354]: E0909 00:36:34.460924 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:35.043760 
kubelet[2354]: E0909 00:36:35.043703 2354 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:35.213432 kubelet[2354]: I0909 00:36:35.213380 2354 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:36:35.223039 kubelet[2354]: E0909 00:36:35.222973 2354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Sep 9 00:36:35.791434 kubelet[2354]: W0909 00:36:35.791325 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:35.791434 kubelet[2354]: E0909 00:36:35.791413 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:35.942559 containerd[1566]: time="2025-09-09T00:36:35.942495661Z" level=info msg="connecting to shim 218f2a3d7f316993827c59f8e4438f28eea0f30603bf96270adb824ff3fd4621" address="unix:///run/containerd/s/7f96e5b7482a448f84c9f877cf93adc51778a5ef239f34daf51cd5d2299121b0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:36:35.946294 containerd[1566]: time="2025-09-09T00:36:35.946224847Z" level=info msg="connecting to shim 0d28c7ee078d15cd15251259c25b24d42d81ad0801e2b472a2daaf513e8ac8d0" 
address="unix:///run/containerd/s/d5061a8773313cafa2634522e8fac224f2373245d9b76c2f77a095608b1460fa" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:36:35.953553 containerd[1566]: time="2025-09-09T00:36:35.953477388Z" level=info msg="connecting to shim ed1ba977356488cbcc0f4bf15abd64c30bfd751c9474f561a13dff5fbab94cb4" address="unix:///run/containerd/s/2166fd1827809db9b12ecfb0a3ab9f96d09672a699c0ce8272b236bc06887e73" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:36:36.010192 systemd[1]: Started cri-containerd-0d28c7ee078d15cd15251259c25b24d42d81ad0801e2b472a2daaf513e8ac8d0.scope - libcontainer container 0d28c7ee078d15cd15251259c25b24d42d81ad0801e2b472a2daaf513e8ac8d0. Sep 9 00:36:36.014644 systemd[1]: Started cri-containerd-218f2a3d7f316993827c59f8e4438f28eea0f30603bf96270adb824ff3fd4621.scope - libcontainer container 218f2a3d7f316993827c59f8e4438f28eea0f30603bf96270adb824ff3fd4621. Sep 9 00:36:36.023088 systemd[1]: Started cri-containerd-ed1ba977356488cbcc0f4bf15abd64c30bfd751c9474f561a13dff5fbab94cb4.scope - libcontainer container ed1ba977356488cbcc0f4bf15abd64c30bfd751c9474f561a13dff5fbab94cb4. 
Sep 9 00:36:36.036765 kubelet[2354]: E0909 00:36:36.036195 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="3.2s" Sep 9 00:36:36.123031 containerd[1566]: time="2025-09-09T00:36:36.122845457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d28c7ee078d15cd15251259c25b24d42d81ad0801e2b472a2daaf513e8ac8d0\"" Sep 9 00:36:36.124363 kubelet[2354]: E0909 00:36:36.124331 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:36.126819 containerd[1566]: time="2025-09-09T00:36:36.126770408Z" level=info msg="CreateContainer within sandbox \"0d28c7ee078d15cd15251259c25b24d42d81ad0801e2b472a2daaf513e8ac8d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:36:36.180178 containerd[1566]: time="2025-09-09T00:36:36.180126505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:82c77e4e2c76d590eec5df7f06d27b10,Namespace:kube-system,Attempt:0,} returns sandbox id \"218f2a3d7f316993827c59f8e4438f28eea0f30603bf96270adb824ff3fd4621\"" Sep 9 00:36:36.181124 kubelet[2354]: E0909 00:36:36.181082 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:36.189270 containerd[1566]: time="2025-09-09T00:36:36.189229852Z" level=info msg="CreateContainer within sandbox \"218f2a3d7f316993827c59f8e4438f28eea0f30603bf96270adb824ff3fd4621\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:36:36.331723 
kubelet[2354]: W0909 00:36:36.331584 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:36.331723 kubelet[2354]: E0909 00:36:36.331676 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:36.373150 containerd[1566]: time="2025-09-09T00:36:36.373004660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed1ba977356488cbcc0f4bf15abd64c30bfd751c9474f561a13dff5fbab94cb4\"" Sep 9 00:36:36.374035 kubelet[2354]: E0909 00:36:36.374007 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:36.375670 containerd[1566]: time="2025-09-09T00:36:36.375636023Z" level=info msg="CreateContainer within sandbox \"ed1ba977356488cbcc0f4bf15abd64c30bfd751c9474f561a13dff5fbab94cb4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:36:36.678732 kubelet[2354]: W0909 00:36:36.678515 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:36.678732 kubelet[2354]: E0909 00:36:36.678633 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to 
list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:36.824897 kubelet[2354]: I0909 00:36:36.824843 2354 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:36:36.825383 kubelet[2354]: E0909 00:36:36.825330 2354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Sep 9 00:36:36.908809 containerd[1566]: time="2025-09-09T00:36:36.908694899Z" level=info msg="Container 29119c0c31297d5eab02a19ac8b2992b6bc48b86a9d1394ab863611c82f11510: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:36:36.912145 containerd[1566]: time="2025-09-09T00:36:36.912089115Z" level=info msg="Container 1d507abdbee681675dd37893e14c922fff09dffa3072761285bed9a582944ce7: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:36:36.916124 containerd[1566]: time="2025-09-09T00:36:36.916084922Z" level=info msg="Container 8eeb3994f2f7632aa34fcab6e6e6fe9cc75ae20f46f42f6a9e1fc0e1d51567ae: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:36:36.924630 containerd[1566]: time="2025-09-09T00:36:36.924579306Z" level=info msg="CreateContainer within sandbox \"ed1ba977356488cbcc0f4bf15abd64c30bfd751c9474f561a13dff5fbab94cb4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1d507abdbee681675dd37893e14c922fff09dffa3072761285bed9a582944ce7\"" Sep 9 00:36:36.926197 containerd[1566]: time="2025-09-09T00:36:36.926153275Z" level=info msg="StartContainer for \"1d507abdbee681675dd37893e14c922fff09dffa3072761285bed9a582944ce7\"" Sep 9 00:36:36.930067 containerd[1566]: time="2025-09-09T00:36:36.928559422Z" level=info msg="connecting to shim 1d507abdbee681675dd37893e14c922fff09dffa3072761285bed9a582944ce7" 
address="unix:///run/containerd/s/2166fd1827809db9b12ecfb0a3ab9f96d09672a699c0ce8272b236bc06887e73" protocol=ttrpc version=3 Sep 9 00:36:36.932008 containerd[1566]: time="2025-09-09T00:36:36.931981441Z" level=info msg="CreateContainer within sandbox \"0d28c7ee078d15cd15251259c25b24d42d81ad0801e2b472a2daaf513e8ac8d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"29119c0c31297d5eab02a19ac8b2992b6bc48b86a9d1394ab863611c82f11510\"" Sep 9 00:36:36.933049 containerd[1566]: time="2025-09-09T00:36:36.933017081Z" level=info msg="CreateContainer within sandbox \"218f2a3d7f316993827c59f8e4438f28eea0f30603bf96270adb824ff3fd4621\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8eeb3994f2f7632aa34fcab6e6e6fe9cc75ae20f46f42f6a9e1fc0e1d51567ae\"" Sep 9 00:36:36.933639 containerd[1566]: time="2025-09-09T00:36:36.933590497Z" level=info msg="StartContainer for \"29119c0c31297d5eab02a19ac8b2992b6bc48b86a9d1394ab863611c82f11510\"" Sep 9 00:36:36.935016 containerd[1566]: time="2025-09-09T00:36:36.934025942Z" level=info msg="StartContainer for \"8eeb3994f2f7632aa34fcab6e6e6fe9cc75ae20f46f42f6a9e1fc0e1d51567ae\"" Sep 9 00:36:36.936075 containerd[1566]: time="2025-09-09T00:36:36.936041959Z" level=info msg="connecting to shim 29119c0c31297d5eab02a19ac8b2992b6bc48b86a9d1394ab863611c82f11510" address="unix:///run/containerd/s/d5061a8773313cafa2634522e8fac224f2373245d9b76c2f77a095608b1460fa" protocol=ttrpc version=3 Sep 9 00:36:36.936200 containerd[1566]: time="2025-09-09T00:36:36.936152187Z" level=info msg="connecting to shim 8eeb3994f2f7632aa34fcab6e6e6fe9cc75ae20f46f42f6a9e1fc0e1d51567ae" address="unix:///run/containerd/s/7f96e5b7482a448f84c9f877cf93adc51778a5ef239f34daf51cd5d2299121b0" protocol=ttrpc version=3 Sep 9 00:36:36.967197 systemd[1]: Started cri-containerd-1d507abdbee681675dd37893e14c922fff09dffa3072761285bed9a582944ce7.scope - libcontainer container 
1d507abdbee681675dd37893e14c922fff09dffa3072761285bed9a582944ce7. Sep 9 00:36:36.979085 systemd[1]: Started cri-containerd-29119c0c31297d5eab02a19ac8b2992b6bc48b86a9d1394ab863611c82f11510.scope - libcontainer container 29119c0c31297d5eab02a19ac8b2992b6bc48b86a9d1394ab863611c82f11510. Sep 9 00:36:36.980579 systemd[1]: Started cri-containerd-8eeb3994f2f7632aa34fcab6e6e6fe9cc75ae20f46f42f6a9e1fc0e1d51567ae.scope - libcontainer container 8eeb3994f2f7632aa34fcab6e6e6fe9cc75ae20f46f42f6a9e1fc0e1d51567ae. Sep 9 00:36:37.106520 kubelet[2354]: W0909 00:36:37.106436 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.5:6443: connect: connection refused Sep 9 00:36:37.106520 kubelet[2354]: E0909 00:36:37.106515 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:36:37.237922 containerd[1566]: time="2025-09-09T00:36:37.237498262Z" level=info msg="StartContainer for \"29119c0c31297d5eab02a19ac8b2992b6bc48b86a9d1394ab863611c82f11510\" returns successfully" Sep 9 00:36:37.237922 containerd[1566]: time="2025-09-09T00:36:37.237848144Z" level=info msg="StartContainer for \"8eeb3994f2f7632aa34fcab6e6e6fe9cc75ae20f46f42f6a9e1fc0e1d51567ae\" returns successfully" Sep 9 00:36:37.239931 containerd[1566]: time="2025-09-09T00:36:37.239508807Z" level=info msg="StartContainer for \"1d507abdbee681675dd37893e14c922fff09dffa3072761285bed9a582944ce7\" returns successfully" Sep 9 00:36:37.247530 kubelet[2354]: E0909 00:36:37.247429 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:37.252087 kubelet[2354]: E0909 00:36:37.250635 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:38.261845 kubelet[2354]: E0909 00:36:38.260333 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:38.261845 kubelet[2354]: E0909 00:36:38.261098 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:38.262410 kubelet[2354]: E0909 00:36:38.262336 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:39.589557 kubelet[2354]: E0909 00:36:39.589491 2354 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:36:40.022024 kubelet[2354]: I0909 00:36:40.021969 2354 apiserver.go:52] "Watching apiserver" Sep 9 00:36:40.027553 kubelet[2354]: I0909 00:36:40.027529 2354 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:36:40.031711 kubelet[2354]: I0909 00:36:40.031671 2354 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:36:40.118474 kubelet[2354]: E0909 00:36:40.118413 2354 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 9 00:36:40.122967 kubelet[2354]: I0909 00:36:40.122753 2354 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:36:40.122967 kubelet[2354]: E0909 
00:36:40.122802 2354 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:36:41.132960 kubelet[2354]: E0909 00:36:41.132900 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:41.263391 kubelet[2354]: E0909 00:36:41.263342 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:43.221126 kubelet[2354]: E0909 00:36:43.221085 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:43.267083 kubelet[2354]: E0909 00:36:43.267032 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:43.595640 kubelet[2354]: I0909 00:36:43.595536 2354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.595498325 podStartE2EDuration="2.595498325s" podCreationTimestamp="2025-09-09 00:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:43.595324286 +0000 UTC m=+12.941217662" watchObservedRunningTime="2025-09-09 00:36:43.595498325 +0000 UTC m=+12.941391701" Sep 9 00:36:47.386848 kubelet[2354]: E0909 00:36:47.386466 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:47.563151 systemd[1]: Reload requested from client PID 2645 ('systemctl') (unit 
session-7.scope)... Sep 9 00:36:47.563169 systemd[1]: Reloading... Sep 9 00:36:47.665967 zram_generator::config[2691]: No configuration found. Sep 9 00:36:47.764793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:36:47.897763 systemd[1]: Reloading finished in 334 ms. Sep 9 00:36:47.928290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:47.944459 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:36:47.944793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:47.944853 systemd[1]: kubelet.service: Consumed 1.983s CPU time, 134.6M memory peak. Sep 9 00:36:47.946965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:36:48.151629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:36:48.164521 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:36:48.207010 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:36:48.207010 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:36:48.207010 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 00:36:48.207460 kubelet[2733]: I0909 00:36:48.207078 2733 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:36:48.216926 kubelet[2733]: I0909 00:36:48.216851 2733 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:36:48.216926 kubelet[2733]: I0909 00:36:48.216907 2733 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:36:48.217219 kubelet[2733]: I0909 00:36:48.217195 2733 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:36:48.218590 kubelet[2733]: I0909 00:36:48.218542 2733 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 00:36:48.220771 kubelet[2733]: I0909 00:36:48.220743 2733 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:36:48.225159 kubelet[2733]: I0909 00:36:48.225102 2733 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:36:48.232646 kubelet[2733]: I0909 00:36:48.232602 2733 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:36:48.232764 kubelet[2733]: I0909 00:36:48.232731 2733 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:36:48.232923 kubelet[2733]: I0909 00:36:48.232867 2733 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:36:48.233130 kubelet[2733]: I0909 00:36:48.232922 2733 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 9 00:36:48.233130 kubelet[2733]: I0909 00:36:48.233107 2733 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:36:48.233130 kubelet[2733]: I0909 00:36:48.233116 2733 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:36:48.233130 kubelet[2733]: I0909 00:36:48.233145 2733 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:36:48.233905 kubelet[2733]: I0909 00:36:48.233276 2733 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:36:48.233905 kubelet[2733]: I0909 00:36:48.233289 2733 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:36:48.233905 kubelet[2733]: I0909 00:36:48.233340 2733 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:36:48.233905 kubelet[2733]: I0909 00:36:48.233357 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:36:48.234364 kubelet[2733]: I0909 00:36:48.234329 2733 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:36:48.234936 kubelet[2733]: I0909 00:36:48.234899 2733 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:36:48.235906 kubelet[2733]: I0909 00:36:48.235856 2733 server.go:1274] "Started kubelet" Sep 9 00:36:48.236902 kubelet[2733]: I0909 00:36:48.236048 2733 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:36:48.236902 kubelet[2733]: I0909 00:36:48.236636 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:36:48.237335 kubelet[2733]: I0909 00:36:48.237276 2733 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:36:48.242403 kubelet[2733]: I0909 00:36:48.242366 2733 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:36:48.243190 kubelet[2733]: 
E0909 00:36:48.243156 2733 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:36:48.248450 kubelet[2733]: I0909 00:36:48.248423 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:36:48.249359 kubelet[2733]: I0909 00:36:48.249333 2733 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:36:48.250287 kubelet[2733]: I0909 00:36:48.250246 2733 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:36:48.250374 kubelet[2733]: I0909 00:36:48.250355 2733 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:36:48.250529 kubelet[2733]: I0909 00:36:48.250496 2733 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:36:48.251917 kubelet[2733]: I0909 00:36:48.251849 2733 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:36:48.252003 kubelet[2733]: I0909 00:36:48.251964 2733 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:36:48.254912 kubelet[2733]: I0909 00:36:48.254885 2733 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:36:48.262923 kubelet[2733]: I0909 00:36:48.262860 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:36:48.264211 kubelet[2733]: I0909 00:36:48.264189 2733 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:36:48.264211 kubelet[2733]: I0909 00:36:48.264210 2733 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:36:48.264306 kubelet[2733]: I0909 00:36:48.264226 2733 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:36:48.264306 kubelet[2733]: E0909 00:36:48.264265 2733 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:36:48.297856 kubelet[2733]: I0909 00:36:48.297804 2733 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:36:48.297856 kubelet[2733]: I0909 00:36:48.297826 2733 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:36:48.297856 kubelet[2733]: I0909 00:36:48.297848 2733 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:36:48.298090 kubelet[2733]: I0909 00:36:48.298023 2733 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:36:48.298090 kubelet[2733]: I0909 00:36:48.298037 2733 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:36:48.298090 kubelet[2733]: I0909 00:36:48.298058 2733 policy_none.go:49] "None policy: Start" Sep 9 00:36:48.298927 kubelet[2733]: I0909 00:36:48.298893 2733 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:36:48.298980 kubelet[2733]: I0909 00:36:48.298932 2733 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:36:48.299122 kubelet[2733]: I0909 00:36:48.299104 2733 state_mem.go:75] "Updated machine memory state" Sep 9 00:36:48.304292 kubelet[2733]: I0909 00:36:48.304240 2733 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:36:48.304658 kubelet[2733]: I0909 00:36:48.304470 2733 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:36:48.304658 kubelet[2733]: I0909 00:36:48.304492 2733 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:36:48.305653 kubelet[2733]: I0909 00:36:48.305311 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:36:48.413217 kubelet[2733]: I0909 00:36:48.413070 2733 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:36:48.552131 kubelet[2733]: I0909 00:36:48.552047 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82c77e4e2c76d590eec5df7f06d27b10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"82c77e4e2c76d590eec5df7f06d27b10\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:48.552131 kubelet[2733]: I0909 00:36:48.552126 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:48.552357 kubelet[2733]: I0909 00:36:48.552162 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:48.552357 kubelet[2733]: I0909 00:36:48.552185 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:48.552357 kubelet[2733]: I0909 00:36:48.552206 2733 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82c77e4e2c76d590eec5df7f06d27b10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"82c77e4e2c76d590eec5df7f06d27b10\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:48.552357 kubelet[2733]: I0909 00:36:48.552229 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82c77e4e2c76d590eec5df7f06d27b10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"82c77e4e2c76d590eec5df7f06d27b10\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:48.552357 kubelet[2733]: I0909 00:36:48.552252 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:48.552463 kubelet[2733]: I0909 00:36:48.552271 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:48.552463 kubelet[2733]: I0909 00:36:48.552292 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:48.568991 kubelet[2733]: E0909 00:36:48.568889 
2733 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:36:48.570066 kubelet[2733]: E0909 00:36:48.570029 2733 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:36:48.570225 kubelet[2733]: E0909 00:36:48.570123 2733 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:36:48.631037 kubelet[2733]: I0909 00:36:48.630992 2733 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 00:36:48.631233 kubelet[2733]: I0909 00:36:48.631101 2733 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:36:48.870806 kubelet[2733]: E0909 00:36:48.870738 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:48.870806 kubelet[2733]: E0909 00:36:48.870761 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:48.870806 kubelet[2733]: E0909 00:36:48.870761 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:49.234699 kubelet[2733]: I0909 00:36:49.234562 2733 apiserver.go:52] "Watching apiserver" Sep 9 00:36:49.251457 kubelet[2733]: I0909 00:36:49.251425 2733 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:36:49.283315 kubelet[2733]: E0909 00:36:49.283277 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:49.283315 kubelet[2733]: E0909 00:36:49.283313 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:49.284353 kubelet[2733]: E0909 00:36:49.284323 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:49.553212 kubelet[2733]: I0909 00:36:49.552719 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.552696921 podStartE2EDuration="2.552696921s" podCreationTimestamp="2025-09-09 00:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:49.552280648 +0000 UTC m=+1.383162860" watchObservedRunningTime="2025-09-09 00:36:49.552696921 +0000 UTC m=+1.383579123" Sep 9 00:36:50.284645 kubelet[2733]: E0909 00:36:50.284561 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:51.242700 kubelet[2733]: I0909 00:36:51.242657 2733 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:36:51.243238 containerd[1566]: time="2025-09-09T00:36:51.243192763Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 00:36:51.243614 kubelet[2733]: I0909 00:36:51.243439 2733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:36:51.285755 kubelet[2733]: E0909 00:36:51.285705 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:51.880818 kubelet[2733]: E0909 00:36:51.880761 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:51.921489 systemd[1]: Created slice kubepods-besteffort-pod48ce6e8b_6063_41d5_bc05_a6c92eebace9.slice - libcontainer container kubepods-besteffort-pod48ce6e8b_6063_41d5_bc05_a6c92eebace9.slice. Sep 9 00:36:51.969891 kubelet[2733]: I0909 00:36:51.969844 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/48ce6e8b-6063-41d5-bc05-a6c92eebace9-kube-proxy\") pod \"kube-proxy-ddwrb\" (UID: \"48ce6e8b-6063-41d5-bc05-a6c92eebace9\") " pod="kube-system/kube-proxy-ddwrb" Sep 9 00:36:51.969995 kubelet[2733]: I0909 00:36:51.969898 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48ce6e8b-6063-41d5-bc05-a6c92eebace9-xtables-lock\") pod \"kube-proxy-ddwrb\" (UID: \"48ce6e8b-6063-41d5-bc05-a6c92eebace9\") " pod="kube-system/kube-proxy-ddwrb" Sep 9 00:36:51.969995 kubelet[2733]: I0909 00:36:51.969915 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48ce6e8b-6063-41d5-bc05-a6c92eebace9-lib-modules\") pod \"kube-proxy-ddwrb\" (UID: \"48ce6e8b-6063-41d5-bc05-a6c92eebace9\") " pod="kube-system/kube-proxy-ddwrb" Sep 9 00:36:51.969995 kubelet[2733]: 
I0909 00:36:51.969932 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lgqb\" (UniqueName: \"kubernetes.io/projected/48ce6e8b-6063-41d5-bc05-a6c92eebace9-kube-api-access-7lgqb\") pod \"kube-proxy-ddwrb\" (UID: \"48ce6e8b-6063-41d5-bc05-a6c92eebace9\") " pod="kube-system/kube-proxy-ddwrb" Sep 9 00:36:52.115424 kubelet[2733]: E0909 00:36:52.115368 2733 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 00:36:52.115424 kubelet[2733]: E0909 00:36:52.115416 2733 projected.go:194] Error preparing data for projected volume kube-api-access-7lgqb for pod kube-system/kube-proxy-ddwrb: configmap "kube-root-ca.crt" not found Sep 9 00:36:52.116045 kubelet[2733]: E0909 00:36:52.115480 2733 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48ce6e8b-6063-41d5-bc05-a6c92eebace9-kube-api-access-7lgqb podName:48ce6e8b-6063-41d5-bc05-a6c92eebace9 nodeName:}" failed. No retries permitted until 2025-09-09 00:36:52.61545577 +0000 UTC m=+4.446337972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7lgqb" (UniqueName: "kubernetes.io/projected/48ce6e8b-6063-41d5-bc05-a6c92eebace9-kube-api-access-7lgqb") pod "kube-proxy-ddwrb" (UID: "48ce6e8b-6063-41d5-bc05-a6c92eebace9") : configmap "kube-root-ca.crt" not found Sep 9 00:36:52.132165 systemd[1]: Created slice kubepods-besteffort-pod1c844657_1db6_4b2e_b16f_26dd0919f942.slice - libcontainer container kubepods-besteffort-pod1c844657_1db6_4b2e_b16f_26dd0919f942.slice. 
Sep 9 00:36:52.171232 kubelet[2733]: I0909 00:36:52.171176 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmf84\" (UniqueName: \"kubernetes.io/projected/1c844657-1db6-4b2e-b16f-26dd0919f942-kube-api-access-xmf84\") pod \"tigera-operator-58fc44c59b-kq4nt\" (UID: \"1c844657-1db6-4b2e-b16f-26dd0919f942\") " pod="tigera-operator/tigera-operator-58fc44c59b-kq4nt" Sep 9 00:36:52.171232 kubelet[2733]: I0909 00:36:52.171222 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c844657-1db6-4b2e-b16f-26dd0919f942-var-lib-calico\") pod \"tigera-operator-58fc44c59b-kq4nt\" (UID: \"1c844657-1db6-4b2e-b16f-26dd0919f942\") " pod="tigera-operator/tigera-operator-58fc44c59b-kq4nt" Sep 9 00:36:52.287284 kubelet[2733]: E0909 00:36:52.287211 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:52.287284 kubelet[2733]: E0909 00:36:52.287227 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:52.435948 containerd[1566]: time="2025-09-09T00:36:52.435791719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-kq4nt,Uid:1c844657-1db6-4b2e-b16f-26dd0919f942,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:36:52.716970 containerd[1566]: time="2025-09-09T00:36:52.716730137Z" level=info msg="connecting to shim 00016632f066dd4fcbb693df0b8c97c909bb3225f984d8dd1fe7ef3887afa34d" address="unix:///run/containerd/s/2d103744d9c9d0aa150c65f1f25e1c80911ce2aaf07b644614ab7701ecda12ff" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:36:52.752141 systemd[1]: Started 
cri-containerd-00016632f066dd4fcbb693df0b8c97c909bb3225f984d8dd1fe7ef3887afa34d.scope - libcontainer container 00016632f066dd4fcbb693df0b8c97c909bb3225f984d8dd1fe7ef3887afa34d. Sep 9 00:36:52.829731 kubelet[2733]: E0909 00:36:52.829668 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:52.830141 containerd[1566]: time="2025-09-09T00:36:52.830098372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-kq4nt,Uid:1c844657-1db6-4b2e-b16f-26dd0919f942,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"00016632f066dd4fcbb693df0b8c97c909bb3225f984d8dd1fe7ef3887afa34d\"" Sep 9 00:36:52.830477 containerd[1566]: time="2025-09-09T00:36:52.830419648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ddwrb,Uid:48ce6e8b-6063-41d5-bc05-a6c92eebace9,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:52.832239 containerd[1566]: time="2025-09-09T00:36:52.832215717Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:36:52.861811 containerd[1566]: time="2025-09-09T00:36:52.861722663Z" level=info msg="connecting to shim 9d72b2ba7f16a052fa0378442585bc798daef1ff45950adbca21a120e118acc9" address="unix:///run/containerd/s/f7f419d9852ecfb47a6fdf9965afe1a499c077ef554838c15474ee3e286aa762" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:36:52.903047 systemd[1]: Started cri-containerd-9d72b2ba7f16a052fa0378442585bc798daef1ff45950adbca21a120e118acc9.scope - libcontainer container 9d72b2ba7f16a052fa0378442585bc798daef1ff45950adbca21a120e118acc9. 
Sep 9 00:36:52.940330 containerd[1566]: time="2025-09-09T00:36:52.940126092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ddwrb,Uid:48ce6e8b-6063-41d5-bc05-a6c92eebace9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d72b2ba7f16a052fa0378442585bc798daef1ff45950adbca21a120e118acc9\"" Sep 9 00:36:52.941224 kubelet[2733]: E0909 00:36:52.941127 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:52.948268 containerd[1566]: time="2025-09-09T00:36:52.948197702Z" level=info msg="CreateContainer within sandbox \"9d72b2ba7f16a052fa0378442585bc798daef1ff45950adbca21a120e118acc9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:36:52.977325 containerd[1566]: time="2025-09-09T00:36:52.977174610Z" level=info msg="Container 66135737dcfa86580fa4dda086a31c4aa475d18c3b8f6e0794169400bf26de93: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:36:52.993958 containerd[1566]: time="2025-09-09T00:36:52.993789888Z" level=info msg="CreateContainer within sandbox \"9d72b2ba7f16a052fa0378442585bc798daef1ff45950adbca21a120e118acc9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"66135737dcfa86580fa4dda086a31c4aa475d18c3b8f6e0794169400bf26de93\"" Sep 9 00:36:52.994816 containerd[1566]: time="2025-09-09T00:36:52.994548876Z" level=info msg="StartContainer for \"66135737dcfa86580fa4dda086a31c4aa475d18c3b8f6e0794169400bf26de93\"" Sep 9 00:36:52.997524 containerd[1566]: time="2025-09-09T00:36:52.997233618Z" level=info msg="connecting to shim 66135737dcfa86580fa4dda086a31c4aa475d18c3b8f6e0794169400bf26de93" address="unix:///run/containerd/s/f7f419d9852ecfb47a6fdf9965afe1a499c077ef554838c15474ee3e286aa762" protocol=ttrpc version=3 Sep 9 00:36:53.042258 systemd[1]: Started cri-containerd-66135737dcfa86580fa4dda086a31c4aa475d18c3b8f6e0794169400bf26de93.scope - libcontainer container 
66135737dcfa86580fa4dda086a31c4aa475d18c3b8f6e0794169400bf26de93. Sep 9 00:36:53.107967 containerd[1566]: time="2025-09-09T00:36:53.107909745Z" level=info msg="StartContainer for \"66135737dcfa86580fa4dda086a31c4aa475d18c3b8f6e0794169400bf26de93\" returns successfully" Sep 9 00:36:53.300145 kubelet[2733]: E0909 00:36:53.297939 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:53.322272 kubelet[2733]: I0909 00:36:53.322096 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ddwrb" podStartSLOduration=2.322072759 podStartE2EDuration="2.322072759s" podCreationTimestamp="2025-09-09 00:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:53.321803913 +0000 UTC m=+5.152686115" watchObservedRunningTime="2025-09-09 00:36:53.322072759 +0000 UTC m=+5.152954961" Sep 9 00:36:54.689833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1233852247.mount: Deactivated successfully. 
Sep 9 00:36:55.360849 containerd[1566]: time="2025-09-09T00:36:55.360759844Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:55.361910 containerd[1566]: time="2025-09-09T00:36:55.361854923Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 00:36:55.363139 containerd[1566]: time="2025-09-09T00:36:55.363100596Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:55.365246 containerd[1566]: time="2025-09-09T00:36:55.365197750Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:36:55.365863 containerd[1566]: time="2025-09-09T00:36:55.365830910Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.533587261s" Sep 9 00:36:55.365863 containerd[1566]: time="2025-09-09T00:36:55.365860036Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 00:36:55.368430 containerd[1566]: time="2025-09-09T00:36:55.368013315Z" level=info msg="CreateContainer within sandbox \"00016632f066dd4fcbb693df0b8c97c909bb3225f984d8dd1fe7ef3887afa34d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:36:55.378770 containerd[1566]: time="2025-09-09T00:36:55.378721083Z" level=info msg="Container 
9d760afa41e50e8029d572b6585569842e9907fbe16d3d7c9d0c50b9313d1029: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:36:55.386995 containerd[1566]: time="2025-09-09T00:36:55.386951983Z" level=info msg="CreateContainer within sandbox \"00016632f066dd4fcbb693df0b8c97c909bb3225f984d8dd1fe7ef3887afa34d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9d760afa41e50e8029d572b6585569842e9907fbe16d3d7c9d0c50b9313d1029\"" Sep 9 00:36:55.387398 containerd[1566]: time="2025-09-09T00:36:55.387366622Z" level=info msg="StartContainer for \"9d760afa41e50e8029d572b6585569842e9907fbe16d3d7c9d0c50b9313d1029\"" Sep 9 00:36:55.388674 containerd[1566]: time="2025-09-09T00:36:55.388532535Z" level=info msg="connecting to shim 9d760afa41e50e8029d572b6585569842e9907fbe16d3d7c9d0c50b9313d1029" address="unix:///run/containerd/s/2d103744d9c9d0aa150c65f1f25e1c80911ce2aaf07b644614ab7701ecda12ff" protocol=ttrpc version=3 Sep 9 00:36:55.446114 systemd[1]: Started cri-containerd-9d760afa41e50e8029d572b6585569842e9907fbe16d3d7c9d0c50b9313d1029.scope - libcontainer container 9d760afa41e50e8029d572b6585569842e9907fbe16d3d7c9d0c50b9313d1029. 
Sep 9 00:36:55.481357 containerd[1566]: time="2025-09-09T00:36:55.481302514Z" level=info msg="StartContainer for \"9d760afa41e50e8029d572b6585569842e9907fbe16d3d7c9d0c50b9313d1029\" returns successfully" Sep 9 00:36:56.316320 kubelet[2733]: I0909 00:36:56.316234 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-kq4nt" podStartSLOduration=2.781335808 podStartE2EDuration="5.316214028s" podCreationTimestamp="2025-09-09 00:36:51 +0000 UTC" firstStartedPulling="2025-09-09 00:36:52.831762394 +0000 UTC m=+4.662644596" lastFinishedPulling="2025-09-09 00:36:55.366640624 +0000 UTC m=+7.197522816" observedRunningTime="2025-09-09 00:36:56.31600225 +0000 UTC m=+8.146884452" watchObservedRunningTime="2025-09-09 00:36:56.316214028 +0000 UTC m=+8.147096230" Sep 9 00:36:56.647262 kubelet[2733]: E0909 00:36:56.647108 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:57.307911 kubelet[2733]: E0909 00:36:57.307522 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:00.964059 sudo[1770]: pam_unix(sudo:session): session closed for user root Sep 9 00:37:00.966417 sshd[1769]: Connection closed by 10.0.0.1 port 35214 Sep 9 00:37:00.968716 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:00.977011 systemd[1]: sshd@6-10.0.0.5:22-10.0.0.1:35214.service: Deactivated successfully. Sep 9 00:37:00.982987 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:37:00.983759 systemd[1]: session-7.scope: Consumed 5.089s CPU time, 226.6M memory peak. Sep 9 00:37:00.985936 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:37:00.988564 systemd-logind[1550]: Removed session 7. 
Sep 9 00:37:07.152583 systemd[1]: Created slice kubepods-besteffort-pod7add462f_6427_4142_9d8d_2e2d726847de.slice - libcontainer container kubepods-besteffort-pod7add462f_6427_4142_9d8d_2e2d726847de.slice. Sep 9 00:37:07.267105 kubelet[2733]: I0909 00:37:07.267039 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7add462f-6427-4142-9d8d-2e2d726847de-typha-certs\") pod \"calico-typha-7f9b4c6d89-s8j6m\" (UID: \"7add462f-6427-4142-9d8d-2e2d726847de\") " pod="calico-system/calico-typha-7f9b4c6d89-s8j6m" Sep 9 00:37:07.267105 kubelet[2733]: I0909 00:37:07.267089 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldfrn\" (UniqueName: \"kubernetes.io/projected/7add462f-6427-4142-9d8d-2e2d726847de-kube-api-access-ldfrn\") pod \"calico-typha-7f9b4c6d89-s8j6m\" (UID: \"7add462f-6427-4142-9d8d-2e2d726847de\") " pod="calico-system/calico-typha-7f9b4c6d89-s8j6m" Sep 9 00:37:07.267105 kubelet[2733]: I0909 00:37:07.267115 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7add462f-6427-4142-9d8d-2e2d726847de-tigera-ca-bundle\") pod \"calico-typha-7f9b4c6d89-s8j6m\" (UID: \"7add462f-6427-4142-9d8d-2e2d726847de\") " pod="calico-system/calico-typha-7f9b4c6d89-s8j6m" Sep 9 00:37:07.939232 systemd[1]: Created slice kubepods-besteffort-pode9a35d49_837f_4757_aa7b_324fc63f477d.slice - libcontainer container kubepods-besteffort-pode9a35d49_837f_4757_aa7b_324fc63f477d.slice. 
Sep 9 00:37:07.971904 kubelet[2733]: I0909 00:37:07.971575 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e9a35d49-837f-4757-aa7b-324fc63f477d-var-run-calico\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.971904 kubelet[2733]: I0909 00:37:07.971638 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9a35d49-837f-4757-aa7b-324fc63f477d-lib-modules\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.971904 kubelet[2733]: I0909 00:37:07.971658 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9a35d49-837f-4757-aa7b-324fc63f477d-var-lib-calico\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.971904 kubelet[2733]: I0909 00:37:07.971682 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e9a35d49-837f-4757-aa7b-324fc63f477d-cni-log-dir\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.971904 kubelet[2733]: I0909 00:37:07.971701 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e9a35d49-837f-4757-aa7b-324fc63f477d-cni-net-dir\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.972243 kubelet[2733]: I0909 00:37:07.971723 2733 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e9a35d49-837f-4757-aa7b-324fc63f477d-flexvol-driver-host\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.972243 kubelet[2733]: I0909 00:37:07.971756 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9a35d49-837f-4757-aa7b-324fc63f477d-tigera-ca-bundle\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.972243 kubelet[2733]: I0909 00:37:07.971776 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e9a35d49-837f-4757-aa7b-324fc63f477d-policysync\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.972243 kubelet[2733]: I0909 00:37:07.971797 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e9a35d49-837f-4757-aa7b-324fc63f477d-cni-bin-dir\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.972243 kubelet[2733]: I0909 00:37:07.971820 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rw69\" (UniqueName: \"kubernetes.io/projected/e9a35d49-837f-4757-aa7b-324fc63f477d-kube-api-access-6rw69\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.972411 kubelet[2733]: I0909 00:37:07.971846 2733 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e9a35d49-837f-4757-aa7b-324fc63f477d-node-certs\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:07.972635 kubelet[2733]: I0909 00:37:07.972558 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9a35d49-837f-4757-aa7b-324fc63f477d-xtables-lock\") pod \"calico-node-q89l8\" (UID: \"e9a35d49-837f-4757-aa7b-324fc63f477d\") " pod="calico-system/calico-node-q89l8" Sep 9 00:37:08.046522 kubelet[2733]: E0909 00:37:08.046449 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdgfl" podUID="e2e3f318-b326-4ebf-beea-35cea16bcc19" Sep 9 00:37:08.056699 kubelet[2733]: E0909 00:37:08.056514 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:08.059196 containerd[1566]: time="2025-09-09T00:37:08.059149993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f9b4c6d89-s8j6m,Uid:7add462f-6427-4142-9d8d-2e2d726847de,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:08.072901 kubelet[2733]: I0909 00:37:08.072848 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2e3f318-b326-4ebf-beea-35cea16bcc19-registration-dir\") pod \"csi-node-driver-bdgfl\" (UID: \"e2e3f318-b326-4ebf-beea-35cea16bcc19\") " pod="calico-system/csi-node-driver-bdgfl" Sep 9 00:37:08.073039 kubelet[2733]: I0909 00:37:08.072955 2733 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2e3f318-b326-4ebf-beea-35cea16bcc19-kubelet-dir\") pod \"csi-node-driver-bdgfl\" (UID: \"e2e3f318-b326-4ebf-beea-35cea16bcc19\") " pod="calico-system/csi-node-driver-bdgfl" Sep 9 00:37:08.073039 kubelet[2733]: I0909 00:37:08.073007 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5b5r\" (UniqueName: \"kubernetes.io/projected/e2e3f318-b326-4ebf-beea-35cea16bcc19-kube-api-access-c5b5r\") pod \"csi-node-driver-bdgfl\" (UID: \"e2e3f318-b326-4ebf-beea-35cea16bcc19\") " pod="calico-system/csi-node-driver-bdgfl" Sep 9 00:37:08.073095 kubelet[2733]: I0909 00:37:08.073056 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e2e3f318-b326-4ebf-beea-35cea16bcc19-varrun\") pod \"csi-node-driver-bdgfl\" (UID: \"e2e3f318-b326-4ebf-beea-35cea16bcc19\") " pod="calico-system/csi-node-driver-bdgfl" Sep 9 00:37:08.073133 kubelet[2733]: I0909 00:37:08.073090 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2e3f318-b326-4ebf-beea-35cea16bcc19-socket-dir\") pod \"csi-node-driver-bdgfl\" (UID: \"e2e3f318-b326-4ebf-beea-35cea16bcc19\") " pod="calico-system/csi-node-driver-bdgfl" Sep 9 00:37:08.079468 kubelet[2733]: E0909 00:37:08.079340 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.079468 kubelet[2733]: W0909 00:37:08.079373 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.079468 kubelet[2733]: E0909 00:37:08.079408 
2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.096629 kubelet[2733]: E0909 00:37:08.096593 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.096629 kubelet[2733]: W0909 00:37:08.096619 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.096815 kubelet[2733]: E0909 00:37:08.096645 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.174450 kubelet[2733]: E0909 00:37:08.174420 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.174450 kubelet[2733]: W0909 00:37:08.174435 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.174450 kubelet[2733]: E0909 00:37:08.174451 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.174689 kubelet[2733]: E0909 00:37:08.174663 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.174689 kubelet[2733]: W0909 00:37:08.174677 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.174756 kubelet[2733]: E0909 00:37:08.174692 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.174965 kubelet[2733]: E0909 00:37:08.174934 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.174965 kubelet[2733]: W0909 00:37:08.174951 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.174965 kubelet[2733]: E0909 00:37:08.174966 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.175211 kubelet[2733]: E0909 00:37:08.175184 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.175211 kubelet[2733]: W0909 00:37:08.175197 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.175295 kubelet[2733]: E0909 00:37:08.175213 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.175439 kubelet[2733]: E0909 00:37:08.175423 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.175489 kubelet[2733]: W0909 00:37:08.175448 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.175489 kubelet[2733]: E0909 00:37:08.175464 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.175705 kubelet[2733]: E0909 00:37:08.175682 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.175705 kubelet[2733]: W0909 00:37:08.175698 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.175787 kubelet[2733]: E0909 00:37:08.175716 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.175941 kubelet[2733]: E0909 00:37:08.175921 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.175941 kubelet[2733]: W0909 00:37:08.175935 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.176045 kubelet[2733]: E0909 00:37:08.175951 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.176165 kubelet[2733]: E0909 00:37:08.176147 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.176165 kubelet[2733]: W0909 00:37:08.176160 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.176249 kubelet[2733]: E0909 00:37:08.176185 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.176352 kubelet[2733]: E0909 00:37:08.176333 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.176352 kubelet[2733]: W0909 00:37:08.176345 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.176427 kubelet[2733]: E0909 00:37:08.176366 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.176536 kubelet[2733]: E0909 00:37:08.176518 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.176536 kubelet[2733]: W0909 00:37:08.176530 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.176634 kubelet[2733]: E0909 00:37:08.176554 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.176729 kubelet[2733]: E0909 00:37:08.176709 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.176729 kubelet[2733]: W0909 00:37:08.176722 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.176813 kubelet[2733]: E0909 00:37:08.176742 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.176939 kubelet[2733]: E0909 00:37:08.176920 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.176939 kubelet[2733]: W0909 00:37:08.176932 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.177019 kubelet[2733]: E0909 00:37:08.176954 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.177177 kubelet[2733]: E0909 00:37:08.177158 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.177177 kubelet[2733]: W0909 00:37:08.177171 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.177254 kubelet[2733]: E0909 00:37:08.177187 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.177408 kubelet[2733]: E0909 00:37:08.177376 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.177408 kubelet[2733]: W0909 00:37:08.177394 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.177489 kubelet[2733]: E0909 00:37:08.177409 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.177602 kubelet[2733]: E0909 00:37:08.177582 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.177602 kubelet[2733]: W0909 00:37:08.177594 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.177674 kubelet[2733]: E0909 00:37:08.177610 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.177845 kubelet[2733]: E0909 00:37:08.177828 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.177845 kubelet[2733]: W0909 00:37:08.177839 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.177953 kubelet[2733]: E0909 00:37:08.177855 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.178039 kubelet[2733]: E0909 00:37:08.178024 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.178039 kubelet[2733]: W0909 00:37:08.178034 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.178124 kubelet[2733]: E0909 00:37:08.178062 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.178206 kubelet[2733]: E0909 00:37:08.178192 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.178206 kubelet[2733]: W0909 00:37:08.178201 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.178268 kubelet[2733]: E0909 00:37:08.178229 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.178361 kubelet[2733]: E0909 00:37:08.178348 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.178361 kubelet[2733]: W0909 00:37:08.178357 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.178425 kubelet[2733]: E0909 00:37:08.178382 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.178528 kubelet[2733]: E0909 00:37:08.178514 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.178528 kubelet[2733]: W0909 00:37:08.178524 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.178589 kubelet[2733]: E0909 00:37:08.178535 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.178712 kubelet[2733]: E0909 00:37:08.178698 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.178712 kubelet[2733]: W0909 00:37:08.178708 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.178774 kubelet[2733]: E0909 00:37:08.178719 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.178905 kubelet[2733]: E0909 00:37:08.178865 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.178905 kubelet[2733]: W0909 00:37:08.178892 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.178905 kubelet[2733]: E0909 00:37:08.178905 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.179092 kubelet[2733]: E0909 00:37:08.179078 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.179092 kubelet[2733]: W0909 00:37:08.179087 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.179176 kubelet[2733]: E0909 00:37:08.179099 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.179275 kubelet[2733]: E0909 00:37:08.179261 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.179275 kubelet[2733]: W0909 00:37:08.179271 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.179337 kubelet[2733]: E0909 00:37:08.179281 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.179457 kubelet[2733]: E0909 00:37:08.179443 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.179457 kubelet[2733]: W0909 00:37:08.179452 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.179525 kubelet[2733]: E0909 00:37:08.179461 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:37:08.242869 kubelet[2733]: E0909 00:37:08.242704 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:37:08.242869 kubelet[2733]: W0909 00:37:08.242772 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:37:08.242869 kubelet[2733]: E0909 00:37:08.242792 2733 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:37:08.249902 containerd[1566]: time="2025-09-09T00:37:08.249821197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q89l8,Uid:e9a35d49-837f-4757-aa7b-324fc63f477d,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:08.276336 containerd[1566]: time="2025-09-09T00:37:08.275724664Z" level=info msg="connecting to shim d38c4660cebdb8502f3b446a7a1e63a673879c471ed030707c583293e670e041" address="unix:///run/containerd/s/8127e8694ef166d0d3768a6896953ade558e4324c2503d210df79688a4513e24" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:37:08.286840 containerd[1566]: time="2025-09-09T00:37:08.283863236Z" level=info msg="connecting to shim c373dfcce3c301a449dbbb0e3819ea1982404e85e7e8fb23fd8424bb03a8583b" address="unix:///run/containerd/s/89f60f8927b513c580108db30f261735f5d2523d828b3b59af59c91158f81ce5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:37:08.305397 systemd[1]: Started cri-containerd-d38c4660cebdb8502f3b446a7a1e63a673879c471ed030707c583293e670e041.scope - libcontainer container d38c4660cebdb8502f3b446a7a1e63a673879c471ed030707c583293e670e041. 
Sep 9 00:37:08.327114 systemd[1]: Started cri-containerd-c373dfcce3c301a449dbbb0e3819ea1982404e85e7e8fb23fd8424bb03a8583b.scope - libcontainer container c373dfcce3c301a449dbbb0e3819ea1982404e85e7e8fb23fd8424bb03a8583b. Sep 9 00:37:08.371556 containerd[1566]: time="2025-09-09T00:37:08.371382727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q89l8,Uid:e9a35d49-837f-4757-aa7b-324fc63f477d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c373dfcce3c301a449dbbb0e3819ea1982404e85e7e8fb23fd8424bb03a8583b\"" Sep 9 00:37:08.373666 containerd[1566]: time="2025-09-09T00:37:08.373613115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f9b4c6d89-s8j6m,Uid:7add462f-6427-4142-9d8d-2e2d726847de,Namespace:calico-system,Attempt:0,} returns sandbox id \"d38c4660cebdb8502f3b446a7a1e63a673879c471ed030707c583293e670e041\"" Sep 9 00:37:08.373979 containerd[1566]: time="2025-09-09T00:37:08.373955027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:37:08.374361 kubelet[2733]: E0909 00:37:08.374220 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:09.265204 kubelet[2733]: E0909 00:37:09.265118 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdgfl" podUID="e2e3f318-b326-4ebf-beea-35cea16bcc19" Sep 9 00:37:10.033345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3654701232.mount: Deactivated successfully. 
Sep 9 00:37:10.093227 containerd[1566]: time="2025-09-09T00:37:10.093147701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:10.093907 containerd[1566]: time="2025-09-09T00:37:10.093849429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 9 00:37:10.094957 containerd[1566]: time="2025-09-09T00:37:10.094920119Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:10.096847 containerd[1566]: time="2025-09-09T00:37:10.096805478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:10.097265 containerd[1566]: time="2025-09-09T00:37:10.097235274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.723055867s" Sep 9 00:37:10.097299 containerd[1566]: time="2025-09-09T00:37:10.097265561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 00:37:10.098323 containerd[1566]: time="2025-09-09T00:37:10.098282941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:37:10.100318 containerd[1566]: time="2025-09-09T00:37:10.099754474Z" level=info msg="CreateContainer within sandbox 
\"c373dfcce3c301a449dbbb0e3819ea1982404e85e7e8fb23fd8424bb03a8583b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:37:10.112901 containerd[1566]: time="2025-09-09T00:37:10.110180218Z" level=info msg="Container 5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:10.120247 containerd[1566]: time="2025-09-09T00:37:10.120212402Z" level=info msg="CreateContainer within sandbox \"c373dfcce3c301a449dbbb0e3819ea1982404e85e7e8fb23fd8424bb03a8583b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed\"" Sep 9 00:37:10.120679 containerd[1566]: time="2025-09-09T00:37:10.120656505Z" level=info msg="StartContainer for \"5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed\"" Sep 9 00:37:10.122016 containerd[1566]: time="2025-09-09T00:37:10.121994166Z" level=info msg="connecting to shim 5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed" address="unix:///run/containerd/s/89f60f8927b513c580108db30f261735f5d2523d828b3b59af59c91158f81ce5" protocol=ttrpc version=3 Sep 9 00:37:10.151042 systemd[1]: Started cri-containerd-5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed.scope - libcontainer container 5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed. Sep 9 00:37:10.210498 systemd[1]: cri-containerd-5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed.scope: Deactivated successfully. 
Sep 9 00:37:10.212324 containerd[1566]: time="2025-09-09T00:37:10.212288222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed\" id:\"5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed\" pid:3293 exited_at:{seconds:1757378230 nanos:211754039}" Sep 9 00:37:10.214750 containerd[1566]: time="2025-09-09T00:37:10.214714678Z" level=info msg="received exit event container_id:\"5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed\" id:\"5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed\" pid:3293 exited_at:{seconds:1757378230 nanos:211754039}" Sep 9 00:37:10.217013 containerd[1566]: time="2025-09-09T00:37:10.216971875Z" level=info msg="StartContainer for \"5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed\" returns successfully" Sep 9 00:37:11.008770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c528e6f2b64695637594ebf0c12ba56aeeb9446d89b5c449ae2ebbba36a49ed-rootfs.mount: Deactivated successfully. 
Sep 9 00:37:11.265710 kubelet[2733]: E0909 00:37:11.265510 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdgfl" podUID="e2e3f318-b326-4ebf-beea-35cea16bcc19" Sep 9 00:37:12.092240 containerd[1566]: time="2025-09-09T00:37:12.092144875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:12.093135 containerd[1566]: time="2025-09-09T00:37:12.093046336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 9 00:37:12.094744 containerd[1566]: time="2025-09-09T00:37:12.094712364Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:12.096946 containerd[1566]: time="2025-09-09T00:37:12.096893238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:12.097711 containerd[1566]: time="2025-09-09T00:37:12.097651562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.999319146s" Sep 9 00:37:12.097711 containerd[1566]: time="2025-09-09T00:37:12.097709420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 00:37:12.098893 containerd[1566]: time="2025-09-09T00:37:12.098840874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:37:12.107687 containerd[1566]: time="2025-09-09T00:37:12.107622718Z" level=info msg="CreateContainer within sandbox \"d38c4660cebdb8502f3b446a7a1e63a673879c471ed030707c583293e670e041\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:37:12.121428 containerd[1566]: time="2025-09-09T00:37:12.120535357Z" level=info msg="Container ec6fe1b480d6ce0deabbe39815586e4afba58b9c8d77ff0778febb5dea483797: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:12.124570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297997763.mount: Deactivated successfully. Sep 9 00:37:12.131627 containerd[1566]: time="2025-09-09T00:37:12.131564370Z" level=info msg="CreateContainer within sandbox \"d38c4660cebdb8502f3b446a7a1e63a673879c471ed030707c583293e670e041\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ec6fe1b480d6ce0deabbe39815586e4afba58b9c8d77ff0778febb5dea483797\"" Sep 9 00:37:12.132313 containerd[1566]: time="2025-09-09T00:37:12.132247753Z" level=info msg="StartContainer for \"ec6fe1b480d6ce0deabbe39815586e4afba58b9c8d77ff0778febb5dea483797\"" Sep 9 00:37:12.133670 containerd[1566]: time="2025-09-09T00:37:12.133638072Z" level=info msg="connecting to shim ec6fe1b480d6ce0deabbe39815586e4afba58b9c8d77ff0778febb5dea483797" address="unix:///run/containerd/s/8127e8694ef166d0d3768a6896953ade558e4324c2503d210df79688a4513e24" protocol=ttrpc version=3 Sep 9 00:37:12.158108 systemd[1]: Started cri-containerd-ec6fe1b480d6ce0deabbe39815586e4afba58b9c8d77ff0778febb5dea483797.scope - libcontainer container ec6fe1b480d6ce0deabbe39815586e4afba58b9c8d77ff0778febb5dea483797. 
Sep 9 00:37:12.214538 containerd[1566]: time="2025-09-09T00:37:12.214468161Z" level=info msg="StartContainer for \"ec6fe1b480d6ce0deabbe39815586e4afba58b9c8d77ff0778febb5dea483797\" returns successfully" Sep 9 00:37:12.342863 kubelet[2733]: E0909 00:37:12.342729 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:12.384403 kubelet[2733]: I0909 00:37:12.384317 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f9b4c6d89-s8j6m" podStartSLOduration=1.661309099 podStartE2EDuration="5.384293695s" podCreationTimestamp="2025-09-09 00:37:07 +0000 UTC" firstStartedPulling="2025-09-09 00:37:08.375662894 +0000 UTC m=+20.206545096" lastFinishedPulling="2025-09-09 00:37:12.09864749 +0000 UTC m=+23.929529692" observedRunningTime="2025-09-09 00:37:12.384230767 +0000 UTC m=+24.215112969" watchObservedRunningTime="2025-09-09 00:37:12.384293695 +0000 UTC m=+24.215175897" Sep 9 00:37:13.265151 kubelet[2733]: E0909 00:37:13.265015 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdgfl" podUID="e2e3f318-b326-4ebf-beea-35cea16bcc19" Sep 9 00:37:13.344193 kubelet[2733]: I0909 00:37:13.344122 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:37:13.344705 kubelet[2733]: E0909 00:37:13.344490 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:15.266730 kubelet[2733]: E0909 00:37:15.266644 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdgfl" podUID="e2e3f318-b326-4ebf-beea-35cea16bcc19" Sep 9 00:37:16.191111 containerd[1566]: time="2025-09-09T00:37:16.191036828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:16.191922 containerd[1566]: time="2025-09-09T00:37:16.191841990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 00:37:16.193058 containerd[1566]: time="2025-09-09T00:37:16.193017836Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:16.194942 containerd[1566]: time="2025-09-09T00:37:16.194822173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:16.195662 containerd[1566]: time="2025-09-09T00:37:16.195610362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.096709766s" Sep 9 00:37:16.195662 containerd[1566]: time="2025-09-09T00:37:16.195648273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 00:37:16.197805 containerd[1566]: time="2025-09-09T00:37:16.197773261Z" level=info msg="CreateContainer within sandbox 
\"c373dfcce3c301a449dbbb0e3819ea1982404e85e7e8fb23fd8424bb03a8583b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:37:16.207937 containerd[1566]: time="2025-09-09T00:37:16.207857786Z" level=info msg="Container 3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:16.218998 containerd[1566]: time="2025-09-09T00:37:16.218947479Z" level=info msg="CreateContainer within sandbox \"c373dfcce3c301a449dbbb0e3819ea1982404e85e7e8fb23fd8424bb03a8583b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714\"" Sep 9 00:37:16.219557 containerd[1566]: time="2025-09-09T00:37:16.219520504Z" level=info msg="StartContainer for \"3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714\"" Sep 9 00:37:16.221109 containerd[1566]: time="2025-09-09T00:37:16.221075974Z" level=info msg="connecting to shim 3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714" address="unix:///run/containerd/s/89f60f8927b513c580108db30f261735f5d2523d828b3b59af59c91158f81ce5" protocol=ttrpc version=3 Sep 9 00:37:16.249141 systemd[1]: Started cri-containerd-3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714.scope - libcontainer container 3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714. 
Sep 9 00:37:16.294942 containerd[1566]: time="2025-09-09T00:37:16.294840369Z" level=info msg="StartContainer for \"3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714\" returns successfully" Sep 9 00:37:17.264841 kubelet[2733]: E0909 00:37:17.264764 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdgfl" podUID="e2e3f318-b326-4ebf-beea-35cea16bcc19" Sep 9 00:37:17.291903 systemd[1]: cri-containerd-3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714.scope: Deactivated successfully. Sep 9 00:37:17.292279 systemd[1]: cri-containerd-3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714.scope: Consumed 651ms CPU time, 178M memory peak, 3.3M read from disk, 171.3M written to disk. Sep 9 00:37:17.292799 containerd[1566]: time="2025-09-09T00:37:17.292733365Z" level=info msg="received exit event container_id:\"3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714\" id:\"3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714\" pid:3396 exited_at:{seconds:1757378237 nanos:292317205}" Sep 9 00:37:17.293402 containerd[1566]: time="2025-09-09T00:37:17.293340104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714\" id:\"3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714\" pid:3396 exited_at:{seconds:1757378237 nanos:292317205}" Sep 9 00:37:17.299264 containerd[1566]: time="2025-09-09T00:37:17.299217954Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:37:17.321143 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-3460d909b00abf7ab64b7363d7e183b4cf5daa45ee83fe97d2dcd496607eb714-rootfs.mount: Deactivated successfully. Sep 9 00:37:17.324089 kubelet[2733]: I0909 00:37:17.324048 2733 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 00:37:17.360840 systemd[1]: Created slice kubepods-burstable-pod05b853f1_12c7_471b_853c_d97bde5dec17.slice - libcontainer container kubepods-burstable-pod05b853f1_12c7_471b_853c_d97bde5dec17.slice. Sep 9 00:37:17.396496 systemd[1]: Created slice kubepods-burstable-podbfb4a09d_bdfb_4efb_a83c_d7bb472ba089.slice - libcontainer container kubepods-burstable-podbfb4a09d_bdfb_4efb_a83c_d7bb472ba089.slice. Sep 9 00:37:17.402922 systemd[1]: Created slice kubepods-besteffort-pod0cd1dd39_fe2c_45f4_8309_3b93ea396e71.slice - libcontainer container kubepods-besteffort-pod0cd1dd39_fe2c_45f4_8309_3b93ea396e71.slice. Sep 9 00:37:17.408582 systemd[1]: Created slice kubepods-besteffort-pod5eabb735_48c0_4991_a154_a56147deb87c.slice - libcontainer container kubepods-besteffort-pod5eabb735_48c0_4991_a154_a56147deb87c.slice. Sep 9 00:37:17.414833 systemd[1]: Created slice kubepods-besteffort-poddffc82f8_bf4b_4a16_a1a2_4a73ee928eb9.slice - libcontainer container kubepods-besteffort-poddffc82f8_bf4b_4a16_a1a2_4a73ee928eb9.slice. Sep 9 00:37:17.421100 systemd[1]: Created slice kubepods-besteffort-podca1b3cef_013c_458e_9dbd_c26a121d5707.slice - libcontainer container kubepods-besteffort-podca1b3cef_013c_458e_9dbd_c26a121d5707.slice. Sep 9 00:37:17.428042 systemd[1]: Created slice kubepods-besteffort-podd5247415_ec51_4af2_9c8d_7db9f264eca7.slice - libcontainer container kubepods-besteffort-podd5247415_ec51_4af2_9c8d_7db9f264eca7.slice. 
Sep 9 00:37:17.477470 kubelet[2733]: I0909 00:37:17.477400 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05b853f1-12c7-471b-853c-d97bde5dec17-config-volume\") pod \"coredns-7c65d6cfc9-p62qs\" (UID: \"05b853f1-12c7-471b-853c-d97bde5dec17\") " pod="kube-system/coredns-7c65d6cfc9-p62qs" Sep 9 00:37:17.477860 kubelet[2733]: I0909 00:37:17.477782 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb5cb\" (UniqueName: \"kubernetes.io/projected/05b853f1-12c7-471b-853c-d97bde5dec17-kube-api-access-zb5cb\") pod \"coredns-7c65d6cfc9-p62qs\" (UID: \"05b853f1-12c7-471b-853c-d97bde5dec17\") " pod="kube-system/coredns-7c65d6cfc9-p62qs" Sep 9 00:37:17.578417 kubelet[2733]: I0909 00:37:17.578355 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hlnb\" (UniqueName: \"kubernetes.io/projected/5eabb735-48c0-4991-a154-a56147deb87c-kube-api-access-6hlnb\") pod \"calico-apiserver-7df6bdd7ff-ldv9n\" (UID: \"5eabb735-48c0-4991-a154-a56147deb87c\") " pod="calico-apiserver/calico-apiserver-7df6bdd7ff-ldv9n" Sep 9 00:37:17.578417 kubelet[2733]: I0909 00:37:17.578404 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbt94\" (UniqueName: \"kubernetes.io/projected/ca1b3cef-013c-458e-9dbd-c26a121d5707-kube-api-access-sbt94\") pod \"calico-kube-controllers-6fc7d657c6-nwxlc\" (UID: \"ca1b3cef-013c-458e-9dbd-c26a121d5707\") " pod="calico-system/calico-kube-controllers-6fc7d657c6-nwxlc" Sep 9 00:37:17.578417 kubelet[2733]: I0909 00:37:17.578425 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9-goldmane-key-pair\") pod \"goldmane-7988f88666-nktsk\" 
(UID: \"dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9\") " pod="calico-system/goldmane-7988f88666-nktsk" Sep 9 00:37:17.578651 kubelet[2733]: I0909 00:37:17.578442 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42dl6\" (UniqueName: \"kubernetes.io/projected/bfb4a09d-bdfb-4efb-a83c-d7bb472ba089-kube-api-access-42dl6\") pod \"coredns-7c65d6cfc9-bfkfk\" (UID: \"bfb4a09d-bdfb-4efb-a83c-d7bb472ba089\") " pod="kube-system/coredns-7c65d6cfc9-bfkfk" Sep 9 00:37:17.578651 kubelet[2733]: I0909 00:37:17.578478 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca1b3cef-013c-458e-9dbd-c26a121d5707-tigera-ca-bundle\") pod \"calico-kube-controllers-6fc7d657c6-nwxlc\" (UID: \"ca1b3cef-013c-458e-9dbd-c26a121d5707\") " pod="calico-system/calico-kube-controllers-6fc7d657c6-nwxlc" Sep 9 00:37:17.578651 kubelet[2733]: I0909 00:37:17.578556 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9-goldmane-ca-bundle\") pod \"goldmane-7988f88666-nktsk\" (UID: \"dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9\") " pod="calico-system/goldmane-7988f88666-nktsk" Sep 9 00:37:17.578651 kubelet[2733]: I0909 00:37:17.578637 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptwr\" (UniqueName: \"kubernetes.io/projected/d5247415-ec51-4af2-9c8d-7db9f264eca7-kube-api-access-4ptwr\") pod \"whisker-5946df5ddf-wf8gv\" (UID: \"d5247415-ec51-4af2-9c8d-7db9f264eca7\") " pod="calico-system/whisker-5946df5ddf-wf8gv" Sep 9 00:37:17.578750 kubelet[2733]: I0909 00:37:17.578657 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/5eabb735-48c0-4991-a154-a56147deb87c-calico-apiserver-certs\") pod \"calico-apiserver-7df6bdd7ff-ldv9n\" (UID: \"5eabb735-48c0-4991-a154-a56147deb87c\") " pod="calico-apiserver/calico-apiserver-7df6bdd7ff-ldv9n" Sep 9 00:37:17.578750 kubelet[2733]: I0909 00:37:17.578682 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9-config\") pod \"goldmane-7988f88666-nktsk\" (UID: \"dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9\") " pod="calico-system/goldmane-7988f88666-nktsk" Sep 9 00:37:17.578750 kubelet[2733]: I0909 00:37:17.578706 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gjcw\" (UniqueName: \"kubernetes.io/projected/dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9-kube-api-access-8gjcw\") pod \"goldmane-7988f88666-nktsk\" (UID: \"dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9\") " pod="calico-system/goldmane-7988f88666-nktsk" Sep 9 00:37:17.578750 kubelet[2733]: I0909 00:37:17.578731 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5247415-ec51-4af2-9c8d-7db9f264eca7-whisker-ca-bundle\") pod \"whisker-5946df5ddf-wf8gv\" (UID: \"d5247415-ec51-4af2-9c8d-7db9f264eca7\") " pod="calico-system/whisker-5946df5ddf-wf8gv" Sep 9 00:37:17.578750 kubelet[2733]: I0909 00:37:17.578749 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0cd1dd39-fe2c-45f4-8309-3b93ea396e71-calico-apiserver-certs\") pod \"calico-apiserver-7df6bdd7ff-rmz9h\" (UID: \"0cd1dd39-fe2c-45f4-8309-3b93ea396e71\") " pod="calico-apiserver/calico-apiserver-7df6bdd7ff-rmz9h" Sep 9 00:37:17.578896 kubelet[2733]: I0909 00:37:17.578765 2733 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k9w5\" (UniqueName: \"kubernetes.io/projected/0cd1dd39-fe2c-45f4-8309-3b93ea396e71-kube-api-access-6k9w5\") pod \"calico-apiserver-7df6bdd7ff-rmz9h\" (UID: \"0cd1dd39-fe2c-45f4-8309-3b93ea396e71\") " pod="calico-apiserver/calico-apiserver-7df6bdd7ff-rmz9h" Sep 9 00:37:17.578896 kubelet[2733]: I0909 00:37:17.578792 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5247415-ec51-4af2-9c8d-7db9f264eca7-whisker-backend-key-pair\") pod \"whisker-5946df5ddf-wf8gv\" (UID: \"d5247415-ec51-4af2-9c8d-7db9f264eca7\") " pod="calico-system/whisker-5946df5ddf-wf8gv" Sep 9 00:37:17.578896 kubelet[2733]: I0909 00:37:17.578829 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfb4a09d-bdfb-4efb-a83c-d7bb472ba089-config-volume\") pod \"coredns-7c65d6cfc9-bfkfk\" (UID: \"bfb4a09d-bdfb-4efb-a83c-d7bb472ba089\") " pod="kube-system/coredns-7c65d6cfc9-bfkfk" Sep 9 00:37:17.664841 kubelet[2733]: E0909 00:37:17.664783 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:17.665743 containerd[1566]: time="2025-09-09T00:37:17.665441344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p62qs,Uid:05b853f1-12c7-471b-853c-d97bde5dec17,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:17.999759 kubelet[2733]: E0909 00:37:17.999615 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:18.000790 containerd[1566]: time="2025-09-09T00:37:18.000739859Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bfkfk,Uid:bfb4a09d-bdfb-4efb-a83c-d7bb472ba089,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:18.006326 containerd[1566]: time="2025-09-09T00:37:18.006293119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-rmz9h,Uid:0cd1dd39-fe2c-45f4-8309-3b93ea396e71,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:37:18.011799 containerd[1566]: time="2025-09-09T00:37:18.011768474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-ldv9n,Uid:5eabb735-48c0-4991-a154-a56147deb87c,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:37:18.018457 containerd[1566]: time="2025-09-09T00:37:18.018393226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nktsk,Uid:dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:18.025220 containerd[1566]: time="2025-09-09T00:37:18.025183598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fc7d657c6-nwxlc,Uid:ca1b3cef-013c-458e-9dbd-c26a121d5707,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:18.030762 containerd[1566]: time="2025-09-09T00:37:18.030727442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5946df5ddf-wf8gv,Uid:d5247415-ec51-4af2-9c8d-7db9f264eca7,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:18.375134 containerd[1566]: time="2025-09-09T00:37:18.375085310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:37:18.397523 containerd[1566]: time="2025-09-09T00:37:18.397457453Z" level=error msg="Failed to destroy network for sandbox \"4f1a98f381bce89b206cc5d8aec945cdbb8acbd647085c4e493ea82feff2b767\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.400748 systemd[1]: 
run-netns-cni\x2da6b95f75\x2dac6e\x2d6049\x2d818f\x2d50b34546b251.mount: Deactivated successfully. Sep 9 00:37:18.406284 containerd[1566]: time="2025-09-09T00:37:18.406104068Z" level=error msg="Failed to destroy network for sandbox \"6752c694b2f8bbecf3344bd663a059963d5ea037cdc63ae831fd7e73be8d406d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.407656 containerd[1566]: time="2025-09-09T00:37:18.407591349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fc7d657c6-nwxlc,Uid:ca1b3cef-013c-458e-9dbd-c26a121d5707,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f1a98f381bce89b206cc5d8aec945cdbb8acbd647085c4e493ea82feff2b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.410267 systemd[1]: run-netns-cni\x2d652ef713\x2de174\x2def7f\x2d05d0\x2de88d195148c5.mount: Deactivated successfully. 
Sep 9 00:37:18.414568 containerd[1566]: time="2025-09-09T00:37:18.414513088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p62qs,Uid:05b853f1-12c7-471b-853c-d97bde5dec17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6752c694b2f8bbecf3344bd663a059963d5ea037cdc63ae831fd7e73be8d406d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.424267 containerd[1566]: time="2025-09-09T00:37:18.424199696Z" level=error msg="Failed to destroy network for sandbox \"f0a3486442ae35a715760735012be2f1cb9225b8249723c08fb992c6f12791af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.426182 kubelet[2733]: E0909 00:37:18.425654 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6752c694b2f8bbecf3344bd663a059963d5ea037cdc63ae831fd7e73be8d406d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.426182 kubelet[2733]: E0909 00:37:18.425653 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f1a98f381bce89b206cc5d8aec945cdbb8acbd647085c4e493ea82feff2b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.426182 kubelet[2733]: E0909 00:37:18.425777 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"6752c694b2f8bbecf3344bd663a059963d5ea037cdc63ae831fd7e73be8d406d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p62qs" Sep 9 00:37:18.426182 kubelet[2733]: E0909 00:37:18.425806 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6752c694b2f8bbecf3344bd663a059963d5ea037cdc63ae831fd7e73be8d406d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p62qs" Sep 9 00:37:18.427038 containerd[1566]: time="2025-09-09T00:37:18.426098278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bfkfk,Uid:bfb4a09d-bdfb-4efb-a83c-d7bb472ba089,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a3486442ae35a715760735012be2f1cb9225b8249723c08fb992c6f12791af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.427116 kubelet[2733]: E0909 00:37:18.425822 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f1a98f381bce89b206cc5d8aec945cdbb8acbd647085c4e493ea82feff2b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fc7d657c6-nwxlc" Sep 9 00:37:18.427116 kubelet[2733]: E0909 00:37:18.425868 2733 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-p62qs_kube-system(05b853f1-12c7-471b-853c-d97bde5dec17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-p62qs_kube-system(05b853f1-12c7-471b-853c-d97bde5dec17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6752c694b2f8bbecf3344bd663a059963d5ea037cdc63ae831fd7e73be8d406d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p62qs" podUID="05b853f1-12c7-471b-853c-d97bde5dec17" Sep 9 00:37:18.427116 kubelet[2733]: E0909 00:37:18.425899 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f1a98f381bce89b206cc5d8aec945cdbb8acbd647085c4e493ea82feff2b767\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fc7d657c6-nwxlc" Sep 9 00:37:18.427210 kubelet[2733]: E0909 00:37:18.425941 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fc7d657c6-nwxlc_calico-system(ca1b3cef-013c-458e-9dbd-c26a121d5707)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fc7d657c6-nwxlc_calico-system(ca1b3cef-013c-458e-9dbd-c26a121d5707)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f1a98f381bce89b206cc5d8aec945cdbb8acbd647085c4e493ea82feff2b767\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-6fc7d657c6-nwxlc" podUID="ca1b3cef-013c-458e-9dbd-c26a121d5707" Sep 9 00:37:18.427210 kubelet[2733]: E0909 00:37:18.426722 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a3486442ae35a715760735012be2f1cb9225b8249723c08fb992c6f12791af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.427210 kubelet[2733]: E0909 00:37:18.426754 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a3486442ae35a715760735012be2f1cb9225b8249723c08fb992c6f12791af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bfkfk" Sep 9 00:37:18.427293 kubelet[2733]: E0909 00:37:18.426769 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a3486442ae35a715760735012be2f1cb9225b8249723c08fb992c6f12791af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bfkfk" Sep 9 00:37:18.427293 kubelet[2733]: E0909 00:37:18.426811 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bfkfk_kube-system(bfb4a09d-bdfb-4efb-a83c-d7bb472ba089)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bfkfk_kube-system(bfb4a09d-bdfb-4efb-a83c-d7bb472ba089)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f0a3486442ae35a715760735012be2f1cb9225b8249723c08fb992c6f12791af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bfkfk" podUID="bfb4a09d-bdfb-4efb-a83c-d7bb472ba089" Sep 9 00:37:18.428633 containerd[1566]: time="2025-09-09T00:37:18.428602738Z" level=error msg="Failed to destroy network for sandbox \"a2594445bba5be31ebdf54be6e36eadd660f7dfad6716944a86b35ce6112065e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.429171 systemd[1]: run-netns-cni\x2d81343d9c\x2daa96\x2d28f3\x2d5be8\x2dd3926b1e788d.mount: Deactivated successfully. Sep 9 00:37:18.431706 systemd[1]: run-netns-cni\x2d25385ba1\x2da2c2\x2d5c72\x2d3e83\x2d8e2a20b75371.mount: Deactivated successfully. 
Sep 9 00:37:18.435139 containerd[1566]: time="2025-09-09T00:37:18.435084341Z" level=error msg="Failed to destroy network for sandbox \"6a816cdc2ecf28d011f32949f805e2ef8ae8df184114fac3934d40658159b671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.437738 containerd[1566]: time="2025-09-09T00:37:18.437668240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5946df5ddf-wf8gv,Uid:d5247415-ec51-4af2-9c8d-7db9f264eca7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2594445bba5be31ebdf54be6e36eadd660f7dfad6716944a86b35ce6112065e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.445947 kubelet[2733]: E0909 00:37:18.443858 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2594445bba5be31ebdf54be6e36eadd660f7dfad6716944a86b35ce6112065e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.446199 kubelet[2733]: E0909 00:37:18.445953 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2594445bba5be31ebdf54be6e36eadd660f7dfad6716944a86b35ce6112065e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5946df5ddf-wf8gv" Sep 9 00:37:18.446199 kubelet[2733]: E0909 00:37:18.445985 2733 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2594445bba5be31ebdf54be6e36eadd660f7dfad6716944a86b35ce6112065e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5946df5ddf-wf8gv" Sep 9 00:37:18.446199 kubelet[2733]: E0909 00:37:18.446033 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5946df5ddf-wf8gv_calico-system(d5247415-ec51-4af2-9c8d-7db9f264eca7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5946df5ddf-wf8gv_calico-system(d5247415-ec51-4af2-9c8d-7db9f264eca7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2594445bba5be31ebdf54be6e36eadd660f7dfad6716944a86b35ce6112065e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5946df5ddf-wf8gv" podUID="d5247415-ec51-4af2-9c8d-7db9f264eca7" Sep 9 00:37:18.447840 containerd[1566]: time="2025-09-09T00:37:18.446897139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nktsk,Uid:dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a816cdc2ecf28d011f32949f805e2ef8ae8df184114fac3934d40658159b671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.447926 kubelet[2733]: E0909 00:37:18.447849 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6a816cdc2ecf28d011f32949f805e2ef8ae8df184114fac3934d40658159b671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.447926 kubelet[2733]: E0909 00:37:18.447892 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a816cdc2ecf28d011f32949f805e2ef8ae8df184114fac3934d40658159b671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nktsk" Sep 9 00:37:18.447926 kubelet[2733]: E0909 00:37:18.447909 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a816cdc2ecf28d011f32949f805e2ef8ae8df184114fac3934d40658159b671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nktsk" Sep 9 00:37:18.448009 kubelet[2733]: E0909 00:37:18.447933 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-nktsk_calico-system(dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-nktsk_calico-system(dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a816cdc2ecf28d011f32949f805e2ef8ae8df184114fac3934d40658159b671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-nktsk" 
podUID="dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9" Sep 9 00:37:18.450705 containerd[1566]: time="2025-09-09T00:37:18.450654460Z" level=error msg="Failed to destroy network for sandbox \"530125bfe81e0890d5a2e82166acda988ac939bac78c52a015f80c13a411e782\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.450914 containerd[1566]: time="2025-09-09T00:37:18.450667164Z" level=error msg="Failed to destroy network for sandbox \"4c611e36b23fad18dabfaf54cacf1d19c95837266035fb6e5393a6f40713d673\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.453614 containerd[1566]: time="2025-09-09T00:37:18.453579529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-rmz9h,Uid:0cd1dd39-fe2c-45f4-8309-3b93ea396e71,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"530125bfe81e0890d5a2e82166acda988ac939bac78c52a015f80c13a411e782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.454092 kubelet[2733]: E0909 00:37:18.453707 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"530125bfe81e0890d5a2e82166acda988ac939bac78c52a015f80c13a411e782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.454092 kubelet[2733]: E0909 00:37:18.453736 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"530125bfe81e0890d5a2e82166acda988ac939bac78c52a015f80c13a411e782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-rmz9h" Sep 9 00:37:18.454092 kubelet[2733]: E0909 00:37:18.453755 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"530125bfe81e0890d5a2e82166acda988ac939bac78c52a015f80c13a411e782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-rmz9h" Sep 9 00:37:18.454187 kubelet[2733]: E0909 00:37:18.453800 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7df6bdd7ff-rmz9h_calico-apiserver(0cd1dd39-fe2c-45f4-8309-3b93ea396e71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7df6bdd7ff-rmz9h_calico-apiserver(0cd1dd39-fe2c-45f4-8309-3b93ea396e71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"530125bfe81e0890d5a2e82166acda988ac939bac78c52a015f80c13a411e782\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-rmz9h" podUID="0cd1dd39-fe2c-45f4-8309-3b93ea396e71" Sep 9 00:37:18.454666 containerd[1566]: time="2025-09-09T00:37:18.454607178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-ldv9n,Uid:5eabb735-48c0-4991-a154-a56147deb87c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"4c611e36b23fad18dabfaf54cacf1d19c95837266035fb6e5393a6f40713d673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.454990 kubelet[2733]: E0909 00:37:18.454951 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c611e36b23fad18dabfaf54cacf1d19c95837266035fb6e5393a6f40713d673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:18.454990 kubelet[2733]: E0909 00:37:18.454981 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c611e36b23fad18dabfaf54cacf1d19c95837266035fb6e5393a6f40713d673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-ldv9n" Sep 9 00:37:18.454990 kubelet[2733]: E0909 00:37:18.454997 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c611e36b23fad18dabfaf54cacf1d19c95837266035fb6e5393a6f40713d673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-ldv9n" Sep 9 00:37:18.455203 kubelet[2733]: E0909 00:37:18.455024 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7df6bdd7ff-ldv9n_calico-apiserver(5eabb735-48c0-4991-a154-a56147deb87c)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7df6bdd7ff-ldv9n_calico-apiserver(5eabb735-48c0-4991-a154-a56147deb87c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c611e36b23fad18dabfaf54cacf1d19c95837266035fb6e5393a6f40713d673\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-ldv9n" podUID="5eabb735-48c0-4991-a154-a56147deb87c" Sep 9 00:37:19.271978 systemd[1]: Created slice kubepods-besteffort-pode2e3f318_b326_4ebf_beea_35cea16bcc19.slice - libcontainer container kubepods-besteffort-pode2e3f318_b326_4ebf_beea_35cea16bcc19.slice. Sep 9 00:37:19.274651 containerd[1566]: time="2025-09-09T00:37:19.274606394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bdgfl,Uid:e2e3f318-b326-4ebf-beea-35cea16bcc19,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:19.320236 systemd[1]: run-netns-cni\x2d434310c1\x2d5780\x2d2f8c\x2d8a7c\x2d26b513e54133.mount: Deactivated successfully. Sep 9 00:37:19.320391 systemd[1]: run-netns-cni\x2d80e3d54c\x2d42d0\x2d9b4e\x2d9438\x2dfe11fa4a4ebb.mount: Deactivated successfully. Sep 9 00:37:19.320482 systemd[1]: run-netns-cni\x2d6a63b40a\x2d013f\x2db461\x2d50f0\x2da6f250137e34.mount: Deactivated successfully. 
Sep 9 00:37:19.636608 containerd[1566]: time="2025-09-09T00:37:19.636313893Z" level=error msg="Failed to destroy network for sandbox \"6540a032f0f6f2aa0092f067e8564363473fc538df5927841a26a44ac1c72283\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:19.638951 containerd[1566]: time="2025-09-09T00:37:19.638783768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bdgfl,Uid:e2e3f318-b326-4ebf-beea-35cea16bcc19,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6540a032f0f6f2aa0092f067e8564363473fc538df5927841a26a44ac1c72283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:19.639811 systemd[1]: run-netns-cni\x2d7201f4a0\x2d36b7\x2d7e1b\x2d7c75\x2d3d932df398a9.mount: Deactivated successfully. 
Sep 9 00:37:19.640172 kubelet[2733]: E0909 00:37:19.640038 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6540a032f0f6f2aa0092f067e8564363473fc538df5927841a26a44ac1c72283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:19.640172 kubelet[2733]: E0909 00:37:19.640129 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6540a032f0f6f2aa0092f067e8564363473fc538df5927841a26a44ac1c72283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bdgfl" Sep 9 00:37:19.640172 kubelet[2733]: E0909 00:37:19.640152 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6540a032f0f6f2aa0092f067e8564363473fc538df5927841a26a44ac1c72283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bdgfl" Sep 9 00:37:19.640513 kubelet[2733]: E0909 00:37:19.640207 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bdgfl_calico-system(e2e3f318-b326-4ebf-beea-35cea16bcc19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bdgfl_calico-system(e2e3f318-b326-4ebf-beea-35cea16bcc19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6540a032f0f6f2aa0092f067e8564363473fc538df5927841a26a44ac1c72283\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bdgfl" podUID="e2e3f318-b326-4ebf-beea-35cea16bcc19" Sep 9 00:37:26.672395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727640092.mount: Deactivated successfully. Sep 9 00:37:29.266513 containerd[1566]: time="2025-09-09T00:37:29.266418731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-ldv9n,Uid:5eabb735-48c0-4991-a154-a56147deb87c,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:37:29.267969 kubelet[2733]: E0909 00:37:29.267266 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:29.269130 containerd[1566]: time="2025-09-09T00:37:29.268771941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bfkfk,Uid:bfb4a09d-bdfb-4efb-a83c-d7bb472ba089,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:30.012691 kubelet[2733]: I0909 00:37:30.012587 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:37:30.013296 kubelet[2733]: E0909 00:37:30.013242 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:30.266246 containerd[1566]: time="2025-09-09T00:37:30.266104376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-rmz9h,Uid:0cd1dd39-fe2c-45f4-8309-3b93ea396e71,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:37:30.408937 kubelet[2733]: E0909 00:37:30.408904 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:30.905921 containerd[1566]: 
time="2025-09-09T00:37:30.904265950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:30.936222 containerd[1566]: time="2025-09-09T00:37:30.936162408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:37:30.964336 containerd[1566]: time="2025-09-09T00:37:30.964254614Z" level=error msg="Failed to destroy network for sandbox \"aa0fa38380f58d2fe024fef51aec2582aed4a0bdb0df798cfb07596bc1768c93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:30.968142 systemd[1]: run-netns-cni\x2d013cceb6\x2d0a47\x2d2070\x2d41df\x2d843fae2301a8.mount: Deactivated successfully. Sep 9 00:37:30.969054 containerd[1566]: time="2025-09-09T00:37:30.969006063Z" level=error msg="Failed to destroy network for sandbox \"eba6f19ac2500bd9fb4dccfb0dddaf2ec64f83c6baaf5ff87a55a5e3a8cc4446\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:30.972184 systemd[1]: run-netns-cni\x2d6b1017ea\x2db1dc\x2d74bf\x2d960c\x2d65054d2e3b24.mount: Deactivated successfully. 
Sep 9 00:37:30.981006 containerd[1566]: time="2025-09-09T00:37:30.980944287Z" level=error msg="Failed to destroy network for sandbox \"ec574d1a34de605a37435f7e994ac1e817ed3a1f2bea83f1ce919487fca862d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:31.019687 containerd[1566]: time="2025-09-09T00:37:31.019628995Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:31.128744 containerd[1566]: time="2025-09-09T00:37:31.128668512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-ldv9n,Uid:5eabb735-48c0-4991-a154-a56147deb87c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa0fa38380f58d2fe024fef51aec2582aed4a0bdb0df798cfb07596bc1768c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:31.129047 kubelet[2733]: E0909 00:37:31.128944 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa0fa38380f58d2fe024fef51aec2582aed4a0bdb0df798cfb07596bc1768c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:31.129047 kubelet[2733]: E0909 00:37:31.129023 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa0fa38380f58d2fe024fef51aec2582aed4a0bdb0df798cfb07596bc1768c93\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-ldv9n" Sep 9 00:37:31.129133 kubelet[2733]: E0909 00:37:31.129047 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa0fa38380f58d2fe024fef51aec2582aed4a0bdb0df798cfb07596bc1768c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-ldv9n" Sep 9 00:37:31.129133 kubelet[2733]: E0909 00:37:31.129102 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7df6bdd7ff-ldv9n_calico-apiserver(5eabb735-48c0-4991-a154-a56147deb87c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7df6bdd7ff-ldv9n_calico-apiserver(5eabb735-48c0-4991-a154-a56147deb87c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa0fa38380f58d2fe024fef51aec2582aed4a0bdb0df798cfb07596bc1768c93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-ldv9n" podUID="5eabb735-48c0-4991-a154-a56147deb87c" Sep 9 00:37:31.199377 containerd[1566]: time="2025-09-09T00:37:31.199205399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bfkfk,Uid:bfb4a09d-bdfb-4efb-a83c-d7bb472ba089,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba6f19ac2500bd9fb4dccfb0dddaf2ec64f83c6baaf5ff87a55a5e3a8cc4446\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:31.199570 kubelet[2733]: E0909 00:37:31.199438 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba6f19ac2500bd9fb4dccfb0dddaf2ec64f83c6baaf5ff87a55a5e3a8cc4446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:31.199621 kubelet[2733]: E0909 00:37:31.199586 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba6f19ac2500bd9fb4dccfb0dddaf2ec64f83c6baaf5ff87a55a5e3a8cc4446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bfkfk" Sep 9 00:37:31.199621 kubelet[2733]: E0909 00:37:31.199605 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba6f19ac2500bd9fb4dccfb0dddaf2ec64f83c6baaf5ff87a55a5e3a8cc4446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bfkfk" Sep 9 00:37:31.199679 kubelet[2733]: E0909 00:37:31.199646 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bfkfk_kube-system(bfb4a09d-bdfb-4efb-a83c-d7bb472ba089)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bfkfk_kube-system(bfb4a09d-bdfb-4efb-a83c-d7bb472ba089)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"eba6f19ac2500bd9fb4dccfb0dddaf2ec64f83c6baaf5ff87a55a5e3a8cc4446\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bfkfk" podUID="bfb4a09d-bdfb-4efb-a83c-d7bb472ba089" Sep 9 00:37:31.265828 kubelet[2733]: E0909 00:37:31.265761 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:31.269364 containerd[1566]: time="2025-09-09T00:37:31.269326356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bdgfl,Uid:e2e3f318-b326-4ebf-beea-35cea16bcc19,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:31.286996 containerd[1566]: time="2025-09-09T00:37:31.286915458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-rmz9h,Uid:0cd1dd39-fe2c-45f4-8309-3b93ea396e71,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec574d1a34de605a37435f7e994ac1e817ed3a1f2bea83f1ce919487fca862d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:31.287392 kubelet[2733]: E0909 00:37:31.287343 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec574d1a34de605a37435f7e994ac1e817ed3a1f2bea83f1ce919487fca862d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:31.287507 kubelet[2733]: E0909 00:37:31.287407 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"ec574d1a34de605a37435f7e994ac1e817ed3a1f2bea83f1ce919487fca862d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-rmz9h" Sep 9 00:37:31.287507 kubelet[2733]: E0909 00:37:31.287433 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec574d1a34de605a37435f7e994ac1e817ed3a1f2bea83f1ce919487fca862d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-rmz9h" Sep 9 00:37:31.287581 kubelet[2733]: E0909 00:37:31.287493 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7df6bdd7ff-rmz9h_calico-apiserver(0cd1dd39-fe2c-45f4-8309-3b93ea396e71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7df6bdd7ff-rmz9h_calico-apiserver(0cd1dd39-fe2c-45f4-8309-3b93ea396e71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec574d1a34de605a37435f7e994ac1e817ed3a1f2bea83f1ce919487fca862d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-rmz9h" podUID="0cd1dd39-fe2c-45f4-8309-3b93ea396e71" Sep 9 00:37:31.289284 containerd[1566]: time="2025-09-09T00:37:31.289250736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p62qs,Uid:05b853f1-12c7-471b-853c-d97bde5dec17,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:31.504012 containerd[1566]: 
time="2025-09-09T00:37:31.503828215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:31.504855 containerd[1566]: time="2025-09-09T00:37:31.504328710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 13.129188887s" Sep 9 00:37:31.504855 containerd[1566]: time="2025-09-09T00:37:31.504363476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:37:31.520067 containerd[1566]: time="2025-09-09T00:37:31.520017491Z" level=info msg="CreateContainer within sandbox \"c373dfcce3c301a449dbbb0e3819ea1982404e85e7e8fb23fd8424bb03a8583b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:37:31.668743 systemd[1]: run-netns-cni\x2dd4c3476f\x2de263\x2da914\x2d1502\x2d561e34392c95.mount: Deactivated successfully. Sep 9 00:37:31.815009 containerd[1566]: time="2025-09-09T00:37:31.814935954Z" level=error msg="Failed to destroy network for sandbox \"85f4ed49a98fd0293e295aefd21a19ced1c1da2fdc3af09af708e63f49747541\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:31.817548 systemd[1]: run-netns-cni\x2d727ecb6f\x2d26b1\x2d6610\x2db6f2\x2de7b52a113a16.mount: Deactivated successfully. 
Sep 9 00:37:31.935111 containerd[1566]: time="2025-09-09T00:37:31.935001310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bdgfl,Uid:e2e3f318-b326-4ebf-beea-35cea16bcc19,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f4ed49a98fd0293e295aefd21a19ced1c1da2fdc3af09af708e63f49747541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:31.935869 kubelet[2733]: E0909 00:37:31.935318 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f4ed49a98fd0293e295aefd21a19ced1c1da2fdc3af09af708e63f49747541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:31.935869 kubelet[2733]: E0909 00:37:31.935386 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f4ed49a98fd0293e295aefd21a19ced1c1da2fdc3af09af708e63f49747541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bdgfl" Sep 9 00:37:31.935869 kubelet[2733]: E0909 00:37:31.935405 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f4ed49a98fd0293e295aefd21a19ced1c1da2fdc3af09af708e63f49747541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bdgfl" Sep 9 
00:37:31.936317 kubelet[2733]: E0909 00:37:31.935456 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bdgfl_calico-system(e2e3f318-b326-4ebf-beea-35cea16bcc19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bdgfl_calico-system(e2e3f318-b326-4ebf-beea-35cea16bcc19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85f4ed49a98fd0293e295aefd21a19ced1c1da2fdc3af09af708e63f49747541\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bdgfl" podUID="e2e3f318-b326-4ebf-beea-35cea16bcc19" Sep 9 00:37:32.120506 containerd[1566]: time="2025-09-09T00:37:32.120287394Z" level=error msg="Failed to destroy network for sandbox \"5bee9a4e6b44169ff64b1cf7333d170ff084c8f62e11d4e5d21c6c0ccb0d1d4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:32.122757 systemd[1]: run-netns-cni\x2df60c06dc\x2ddd83\x2d5083\x2d1c08\x2dfd7bb96ffbf1.mount: Deactivated successfully. 
Sep 9 00:37:32.193207 containerd[1566]: time="2025-09-09T00:37:32.193124293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p62qs,Uid:05b853f1-12c7-471b-853c-d97bde5dec17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bee9a4e6b44169ff64b1cf7333d170ff084c8f62e11d4e5d21c6c0ccb0d1d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:32.193471 kubelet[2733]: E0909 00:37:32.193422 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bee9a4e6b44169ff64b1cf7333d170ff084c8f62e11d4e5d21c6c0ccb0d1d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:37:32.193552 kubelet[2733]: E0909 00:37:32.193499 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bee9a4e6b44169ff64b1cf7333d170ff084c8f62e11d4e5d21c6c0ccb0d1d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p62qs" Sep 9 00:37:32.193552 kubelet[2733]: E0909 00:37:32.193523 2733 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bee9a4e6b44169ff64b1cf7333d170ff084c8f62e11d4e5d21c6c0ccb0d1d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p62qs" 
Sep 9 00:37:32.193642 kubelet[2733]: E0909 00:37:32.193568 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-p62qs_kube-system(05b853f1-12c7-471b-853c-d97bde5dec17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-p62qs_kube-system(05b853f1-12c7-471b-853c-d97bde5dec17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bee9a4e6b44169ff64b1cf7333d170ff084c8f62e11d4e5d21c6c0ccb0d1d4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p62qs" podUID="05b853f1-12c7-471b-853c-d97bde5dec17" Sep 9 00:37:32.264816 systemd[1]: Started sshd@7-10.0.0.5:22-10.0.0.1:36878.service - OpenSSH per-connection server daemon (10.0.0.1:36878). Sep 9 00:37:32.295896 containerd[1566]: time="2025-09-09T00:37:32.295676192Z" level=info msg="Container a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:32.410337 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 36878 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:37:32.412099 sshd-session[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:32.417128 systemd-logind[1550]: New session 8 of user core. Sep 9 00:37:32.432089 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 9 00:37:32.608851 containerd[1566]: time="2025-09-09T00:37:32.608779548Z" level=info msg="CreateContainer within sandbox \"c373dfcce3c301a449dbbb0e3819ea1982404e85e7e8fb23fd8424bb03a8583b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3\"" Sep 9 00:37:32.610532 sshd[3870]: Connection closed by 10.0.0.1 port 36878 Sep 9 00:37:32.610949 sshd-session[3868]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:32.613509 containerd[1566]: time="2025-09-09T00:37:32.613459937Z" level=info msg="StartContainer for \"a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3\"" Sep 9 00:37:32.615938 containerd[1566]: time="2025-09-09T00:37:32.615258950Z" level=info msg="connecting to shim a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3" address="unix:///run/containerd/s/89f60f8927b513c580108db30f261735f5d2523d828b3b59af59c91158f81ce5" protocol=ttrpc version=3 Sep 9 00:37:32.618601 systemd[1]: sshd@7-10.0.0.5:22-10.0.0.1:36878.service: Deactivated successfully. Sep 9 00:37:32.621719 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:37:32.625010 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:37:32.627084 systemd-logind[1550]: Removed session 8. Sep 9 00:37:32.705238 systemd[1]: Started cri-containerd-a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3.scope - libcontainer container a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3. Sep 9 00:37:32.807978 containerd[1566]: time="2025-09-09T00:37:32.807919806Z" level=info msg="StartContainer for \"a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3\" returns successfully" Sep 9 00:37:32.900453 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:37:32.900634 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 00:37:33.190851 kubelet[2733]: I0909 00:37:33.190791 2733 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5247415-ec51-4af2-9c8d-7db9f264eca7-whisker-backend-key-pair\") pod \"d5247415-ec51-4af2-9c8d-7db9f264eca7\" (UID: \"d5247415-ec51-4af2-9c8d-7db9f264eca7\") " Sep 9 00:37:33.190851 kubelet[2733]: I0909 00:37:33.190848 2733 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ptwr\" (UniqueName: \"kubernetes.io/projected/d5247415-ec51-4af2-9c8d-7db9f264eca7-kube-api-access-4ptwr\") pod \"d5247415-ec51-4af2-9c8d-7db9f264eca7\" (UID: \"d5247415-ec51-4af2-9c8d-7db9f264eca7\") " Sep 9 00:37:33.191489 kubelet[2733]: I0909 00:37:33.190898 2733 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5247415-ec51-4af2-9c8d-7db9f264eca7-whisker-ca-bundle\") pod \"d5247415-ec51-4af2-9c8d-7db9f264eca7\" (UID: \"d5247415-ec51-4af2-9c8d-7db9f264eca7\") " Sep 9 00:37:33.191489 kubelet[2733]: I0909 00:37:33.191455 2733 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5247415-ec51-4af2-9c8d-7db9f264eca7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d5247415-ec51-4af2-9c8d-7db9f264eca7" (UID: "d5247415-ec51-4af2-9c8d-7db9f264eca7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:37:33.198908 kubelet[2733]: I0909 00:37:33.196686 2733 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5247415-ec51-4af2-9c8d-7db9f264eca7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d5247415-ec51-4af2-9c8d-7db9f264eca7" (UID: "d5247415-ec51-4af2-9c8d-7db9f264eca7"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:37:33.198830 systemd[1]: var-lib-kubelet-pods-d5247415\x2dec51\x2d4af2\x2d9c8d\x2d7db9f264eca7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 00:37:33.200034 kubelet[2733]: I0909 00:37:33.199985 2733 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5247415-ec51-4af2-9c8d-7db9f264eca7-kube-api-access-4ptwr" (OuterVolumeSpecName: "kube-api-access-4ptwr") pod "d5247415-ec51-4af2-9c8d-7db9f264eca7" (UID: "d5247415-ec51-4af2-9c8d-7db9f264eca7"). InnerVolumeSpecName "kube-api-access-4ptwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:37:33.205075 systemd[1]: var-lib-kubelet-pods-d5247415\x2dec51\x2d4af2\x2d9c8d\x2d7db9f264eca7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4ptwr.mount: Deactivated successfully. Sep 9 00:37:33.266484 containerd[1566]: time="2025-09-09T00:37:33.266134497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nktsk,Uid:dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:33.291674 kubelet[2733]: I0909 00:37:33.291601 2733 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5247415-ec51-4af2-9c8d-7db9f264eca7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:33.291674 kubelet[2733]: I0909 00:37:33.291655 2733 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ptwr\" (UniqueName: \"kubernetes.io/projected/d5247415-ec51-4af2-9c8d-7db9f264eca7-kube-api-access-4ptwr\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:33.291674 kubelet[2733]: I0909 00:37:33.291668 2733 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5247415-ec51-4af2-9c8d-7db9f264eca7-whisker-ca-bundle\") on node \"localhost\" DevicePath 
\"\"" Sep 9 00:37:33.440616 systemd[1]: Removed slice kubepods-besteffort-podd5247415_ec51_4af2_9c8d_7db9f264eca7.slice - libcontainer container kubepods-besteffort-podd5247415_ec51_4af2_9c8d_7db9f264eca7.slice. Sep 9 00:37:33.567611 containerd[1566]: time="2025-09-09T00:37:33.567566015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3\" id:\"ca886aea0c0de7ec051e77b112910a4c1bccd29dbb5e89f5c373e3444a353dc3\" pid:3984 exit_status:1 exited_at:{seconds:1757378253 nanos:567079299}" Sep 9 00:37:33.724817 kubelet[2733]: I0909 00:37:33.723770 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q89l8" podStartSLOduration=3.590464654 podStartE2EDuration="26.723753966s" podCreationTimestamp="2025-09-09 00:37:07 +0000 UTC" firstStartedPulling="2025-09-09 00:37:08.372796301 +0000 UTC m=+20.203678503" lastFinishedPulling="2025-09-09 00:37:31.506085613 +0000 UTC m=+43.336967815" observedRunningTime="2025-09-09 00:37:33.723663341 +0000 UTC m=+45.554545543" watchObservedRunningTime="2025-09-09 00:37:33.723753966 +0000 UTC m=+45.554636158" Sep 9 00:37:33.959994 systemd-networkd[1488]: cali2f1a0ee69af: Link UP Sep 9 00:37:33.962146 systemd-networkd[1488]: cali2f1a0ee69af: Gained carrier Sep 9 00:37:33.990596 containerd[1566]: 2025-09-09 00:37:33.290 [INFO][3949] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:33.990596 containerd[1566]: 2025-09-09 00:37:33.312 [INFO][3949] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--nktsk-eth0 goldmane-7988f88666- calico-system dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9 871 0 2025-09-09 00:37:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-nktsk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2f1a0ee69af [] [] }} ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Namespace="calico-system" Pod="goldmane-7988f88666-nktsk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nktsk-" Sep 9 00:37:33.990596 containerd[1566]: 2025-09-09 00:37:33.313 [INFO][3949] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Namespace="calico-system" Pod="goldmane-7988f88666-nktsk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nktsk-eth0" Sep 9 00:37:33.990596 containerd[1566]: 2025-09-09 00:37:33.429 [INFO][3964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" HandleID="k8s-pod-network.24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Workload="localhost-k8s-goldmane--7988f88666--nktsk-eth0" Sep 9 00:37:33.990954 containerd[1566]: 2025-09-09 00:37:33.429 [INFO][3964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" HandleID="k8s-pod-network.24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Workload="localhost-k8s-goldmane--7988f88666--nktsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012fad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-nktsk", "timestamp":"2025-09-09 00:37:33.429007618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:33.990954 containerd[1566]: 2025-09-09 00:37:33.429 [INFO][3964] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:33.990954 containerd[1566]: 2025-09-09 00:37:33.429 [INFO][3964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:33.990954 containerd[1566]: 2025-09-09 00:37:33.429 [INFO][3964] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:33.990954 containerd[1566]: 2025-09-09 00:37:33.552 [INFO][3964] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" host="localhost" Sep 9 00:37:33.990954 containerd[1566]: 2025-09-09 00:37:33.561 [INFO][3964] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:33.990954 containerd[1566]: 2025-09-09 00:37:33.570 [INFO][3964] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:33.990954 containerd[1566]: 2025-09-09 00:37:33.572 [INFO][3964] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:33.990954 containerd[1566]: 2025-09-09 00:37:33.576 [INFO][3964] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:33.990954 containerd[1566]: 2025-09-09 00:37:33.576 [INFO][3964] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" host="localhost" Sep 9 00:37:33.991389 containerd[1566]: 2025-09-09 00:37:33.722 [INFO][3964] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4 Sep 9 00:37:33.991389 containerd[1566]: 2025-09-09 00:37:33.821 [INFO][3964] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" host="localhost" Sep 9 00:37:33.991389 
containerd[1566]: 2025-09-09 00:37:33.939 [INFO][3964] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" host="localhost" Sep 9 00:37:33.991389 containerd[1566]: 2025-09-09 00:37:33.939 [INFO][3964] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" host="localhost" Sep 9 00:37:33.991389 containerd[1566]: 2025-09-09 00:37:33.939 [INFO][3964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:33.991389 containerd[1566]: 2025-09-09 00:37:33.939 [INFO][3964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" HandleID="k8s-pod-network.24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Workload="localhost-k8s-goldmane--7988f88666--nktsk-eth0" Sep 9 00:37:33.991573 containerd[1566]: 2025-09-09 00:37:33.944 [INFO][3949] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Namespace="calico-system" Pod="goldmane-7988f88666-nktsk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nktsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--nktsk-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-nktsk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f1a0ee69af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:33.991573 containerd[1566]: 2025-09-09 00:37:33.944 [INFO][3949] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Namespace="calico-system" Pod="goldmane-7988f88666-nktsk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nktsk-eth0" Sep 9 00:37:33.991683 containerd[1566]: 2025-09-09 00:37:33.944 [INFO][3949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f1a0ee69af ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Namespace="calico-system" Pod="goldmane-7988f88666-nktsk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nktsk-eth0" Sep 9 00:37:33.991683 containerd[1566]: 2025-09-09 00:37:33.968 [INFO][3949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Namespace="calico-system" Pod="goldmane-7988f88666-nktsk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nktsk-eth0" Sep 9 00:37:33.991742 containerd[1566]: 2025-09-09 00:37:33.968 [INFO][3949] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Namespace="calico-system" Pod="goldmane-7988f88666-nktsk" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nktsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--nktsk-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4", Pod:"goldmane-7988f88666-nktsk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f1a0ee69af", MAC:"5e:7d:f1:65:7b:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:33.991807 containerd[1566]: 2025-09-09 00:37:33.980 [INFO][3949] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" Namespace="calico-system" Pod="goldmane-7988f88666-nktsk" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nktsk-eth0" Sep 9 00:37:34.002451 systemd[1]: Created slice kubepods-besteffort-pode92bf4e5_5efe_42d1_9794_23cab49eda09.slice - libcontainer container kubepods-besteffort-pode92bf4e5_5efe_42d1_9794_23cab49eda09.slice. Sep 9 00:37:34.098260 kubelet[2733]: I0909 00:37:34.098169 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e92bf4e5-5efe-42d1-9794-23cab49eda09-whisker-backend-key-pair\") pod \"whisker-645556b648-5gjs6\" (UID: \"e92bf4e5-5efe-42d1-9794-23cab49eda09\") " pod="calico-system/whisker-645556b648-5gjs6" Sep 9 00:37:34.098260 kubelet[2733]: I0909 00:37:34.098244 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxchk\" (UniqueName: \"kubernetes.io/projected/e92bf4e5-5efe-42d1-9794-23cab49eda09-kube-api-access-kxchk\") pod \"whisker-645556b648-5gjs6\" (UID: \"e92bf4e5-5efe-42d1-9794-23cab49eda09\") " pod="calico-system/whisker-645556b648-5gjs6" Sep 9 00:37:34.098260 kubelet[2733]: I0909 00:37:34.098272 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e92bf4e5-5efe-42d1-9794-23cab49eda09-whisker-ca-bundle\") pod \"whisker-645556b648-5gjs6\" (UID: \"e92bf4e5-5efe-42d1-9794-23cab49eda09\") " pod="calico-system/whisker-645556b648-5gjs6" Sep 9 00:37:34.224954 containerd[1566]: time="2025-09-09T00:37:34.224747876Z" level=info msg="connecting to shim 24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4" address="unix:///run/containerd/s/cab4add1ae2b315445d59ab39c8e2fd88da93357819da9291ede4c8dd2f79068" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:37:34.252190 systemd[1]: Started cri-containerd-24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4.scope - libcontainer container 
24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4. Sep 9 00:37:34.271412 containerd[1566]: time="2025-09-09T00:37:34.271044796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fc7d657c6-nwxlc,Uid:ca1b3cef-013c-458e-9dbd-c26a121d5707,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:34.276447 kubelet[2733]: I0909 00:37:34.276338 2733 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5247415-ec51-4af2-9c8d-7db9f264eca7" path="/var/lib/kubelet/pods/d5247415-ec51-4af2-9c8d-7db9f264eca7/volumes" Sep 9 00:37:34.283768 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:34.308903 containerd[1566]: time="2025-09-09T00:37:34.308798863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645556b648-5gjs6,Uid:e92bf4e5-5efe-42d1-9794-23cab49eda09,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:34.383516 containerd[1566]: time="2025-09-09T00:37:34.383464628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nktsk,Uid:dffc82f8-bf4b-4a16-a1a2-4a73ee928eb9,Namespace:calico-system,Attempt:0,} returns sandbox id \"24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4\"" Sep 9 00:37:34.389239 containerd[1566]: time="2025-09-09T00:37:34.388263964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:37:34.569837 systemd-networkd[1488]: cali33c787db901: Link UP Sep 9 00:37:34.573644 systemd-networkd[1488]: cali33c787db901: Gained carrier Sep 9 00:37:34.646591 containerd[1566]: 2025-09-09 00:37:34.342 [INFO][4092] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:34.646591 containerd[1566]: 2025-09-09 00:37:34.360 [INFO][4092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0 
calico-kube-controllers-6fc7d657c6- calico-system ca1b3cef-013c-458e-9dbd-c26a121d5707 874 0 2025-09-09 00:37:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fc7d657c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6fc7d657c6-nwxlc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali33c787db901 [] [] }} ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Namespace="calico-system" Pod="calico-kube-controllers-6fc7d657c6-nwxlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-" Sep 9 00:37:34.646591 containerd[1566]: 2025-09-09 00:37:34.360 [INFO][4092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Namespace="calico-system" Pod="calico-kube-controllers-6fc7d657c6-nwxlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0" Sep 9 00:37:34.646591 containerd[1566]: 2025-09-09 00:37:34.468 [INFO][4162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" HandleID="k8s-pod-network.1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Workload="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0" Sep 9 00:37:34.646943 containerd[1566]: 2025-09-09 00:37:34.469 [INFO][4162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" HandleID="k8s-pod-network.1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Workload="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000135380), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6fc7d657c6-nwxlc", "timestamp":"2025-09-09 00:37:34.467865768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:34.646943 containerd[1566]: 2025-09-09 00:37:34.469 [INFO][4162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:34.646943 containerd[1566]: 2025-09-09 00:37:34.469 [INFO][4162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:34.646943 containerd[1566]: 2025-09-09 00:37:34.470 [INFO][4162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:34.646943 containerd[1566]: 2025-09-09 00:37:34.485 [INFO][4162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" host="localhost" Sep 9 00:37:34.646943 containerd[1566]: 2025-09-09 00:37:34.496 [INFO][4162] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:34.646943 containerd[1566]: 2025-09-09 00:37:34.514 [INFO][4162] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:34.646943 containerd[1566]: 2025-09-09 00:37:34.519 [INFO][4162] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:34.646943 containerd[1566]: 2025-09-09 00:37:34.528 [INFO][4162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:34.646943 containerd[1566]: 2025-09-09 00:37:34.529 [INFO][4162] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" 
host="localhost" Sep 9 00:37:34.647306 containerd[1566]: 2025-09-09 00:37:34.534 [INFO][4162] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769 Sep 9 00:37:34.647306 containerd[1566]: 2025-09-09 00:37:34.540 [INFO][4162] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" host="localhost" Sep 9 00:37:34.647306 containerd[1566]: 2025-09-09 00:37:34.549 [INFO][4162] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" host="localhost" Sep 9 00:37:34.647306 containerd[1566]: 2025-09-09 00:37:34.549 [INFO][4162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" host="localhost" Sep 9 00:37:34.647306 containerd[1566]: 2025-09-09 00:37:34.549 [INFO][4162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:37:34.647306 containerd[1566]: 2025-09-09 00:37:34.549 [INFO][4162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" HandleID="k8s-pod-network.1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Workload="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0" Sep 9 00:37:34.647494 containerd[1566]: 2025-09-09 00:37:34.564 [INFO][4092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Namespace="calico-system" Pod="calico-kube-controllers-6fc7d657c6-nwxlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0", GenerateName:"calico-kube-controllers-6fc7d657c6-", Namespace:"calico-system", SelfLink:"", UID:"ca1b3cef-013c-458e-9dbd-c26a121d5707", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fc7d657c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6fc7d657c6-nwxlc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33c787db901", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:34.647576 containerd[1566]: 2025-09-09 00:37:34.564 [INFO][4092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Namespace="calico-system" Pod="calico-kube-controllers-6fc7d657c6-nwxlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0" Sep 9 00:37:34.647576 containerd[1566]: 2025-09-09 00:37:34.565 [INFO][4092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33c787db901 ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Namespace="calico-system" Pod="calico-kube-controllers-6fc7d657c6-nwxlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0" Sep 9 00:37:34.647576 containerd[1566]: 2025-09-09 00:37:34.575 [INFO][4092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Namespace="calico-system" Pod="calico-kube-controllers-6fc7d657c6-nwxlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0" Sep 9 00:37:34.647682 containerd[1566]: 2025-09-09 00:37:34.576 [INFO][4092] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Namespace="calico-system" Pod="calico-kube-controllers-6fc7d657c6-nwxlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0", GenerateName:"calico-kube-controllers-6fc7d657c6-", Namespace:"calico-system", SelfLink:"", UID:"ca1b3cef-013c-458e-9dbd-c26a121d5707", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fc7d657c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769", Pod:"calico-kube-controllers-6fc7d657c6-nwxlc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33c787db901", MAC:"66:41:63:78:5c:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:34.647755 containerd[1566]: 2025-09-09 00:37:34.643 [INFO][4092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" Namespace="calico-system" Pod="calico-kube-controllers-6fc7d657c6-nwxlc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fc7d657c6--nwxlc-eth0" Sep 9 00:37:34.650050 containerd[1566]: time="2025-09-09T00:37:34.649957445Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3\" id:\"a9759001f4c483763526d2c099d855d870c1f76b5b2ffcdfa2d665d253ed0e69\" pid:4210 exit_status:1 exited_at:{seconds:1757378254 nanos:649583546}" Sep 9 00:37:35.120123 systemd-networkd[1488]: cali57056db4a0d: Link UP Sep 9 00:37:35.121121 systemd-networkd[1488]: cali57056db4a0d: Gained carrier Sep 9 00:37:35.155105 systemd-networkd[1488]: cali2f1a0ee69af: Gained IPv6LL Sep 9 00:37:35.181589 containerd[1566]: 2025-09-09 00:37:34.417 [INFO][4167] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:37:35.181589 containerd[1566]: 2025-09-09 00:37:34.447 [INFO][4167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--645556b648--5gjs6-eth0 whisker-645556b648- calico-system e92bf4e5-5efe-42d1-9794-23cab49eda09 1012 0 2025-09-09 00:37:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:645556b648 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-645556b648-5gjs6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali57056db4a0d [] [] }} ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Namespace="calico-system" Pod="whisker-645556b648-5gjs6" WorkloadEndpoint="localhost-k8s-whisker--645556b648--5gjs6-" Sep 9 00:37:35.181589 containerd[1566]: 2025-09-09 00:37:34.447 [INFO][4167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Namespace="calico-system" Pod="whisker-645556b648-5gjs6" WorkloadEndpoint="localhost-k8s-whisker--645556b648--5gjs6-eth0" Sep 9 00:37:35.181589 containerd[1566]: 2025-09-09 00:37:34.540 [INFO][4190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" HandleID="k8s-pod-network.a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Workload="localhost-k8s-whisker--645556b648--5gjs6-eth0" Sep 9 00:37:35.181957 containerd[1566]: 2025-09-09 00:37:34.543 [INFO][4190] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" HandleID="k8s-pod-network.a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Workload="localhost-k8s-whisker--645556b648--5gjs6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-645556b648-5gjs6", "timestamp":"2025-09-09 00:37:34.54050619 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:35.181957 containerd[1566]: 2025-09-09 00:37:34.544 [INFO][4190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:35.181957 containerd[1566]: 2025-09-09 00:37:34.549 [INFO][4190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:37:35.181957 containerd[1566]: 2025-09-09 00:37:34.550 [INFO][4190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:35.181957 containerd[1566]: 2025-09-09 00:37:34.586 [INFO][4190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" host="localhost" Sep 9 00:37:35.181957 containerd[1566]: 2025-09-09 00:37:34.761 [INFO][4190] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:35.181957 containerd[1566]: 2025-09-09 00:37:34.767 [INFO][4190] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:35.181957 containerd[1566]: 2025-09-09 00:37:34.769 [INFO][4190] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:35.181957 containerd[1566]: 2025-09-09 00:37:34.771 [INFO][4190] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:35.181957 containerd[1566]: 2025-09-09 00:37:34.772 [INFO][4190] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" host="localhost" Sep 9 00:37:35.182300 containerd[1566]: 2025-09-09 00:37:34.774 [INFO][4190] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94 Sep 9 00:37:35.182300 containerd[1566]: 2025-09-09 00:37:34.816 [INFO][4190] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" host="localhost" Sep 9 00:37:35.182300 containerd[1566]: 2025-09-09 00:37:35.107 [INFO][4190] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" host="localhost" Sep 9 00:37:35.182300 containerd[1566]: 2025-09-09 00:37:35.107 [INFO][4190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" host="localhost" Sep 9 00:37:35.182300 containerd[1566]: 2025-09-09 00:37:35.107 [INFO][4190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:35.182300 containerd[1566]: 2025-09-09 00:37:35.107 [INFO][4190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" HandleID="k8s-pod-network.a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Workload="localhost-k8s-whisker--645556b648--5gjs6-eth0" Sep 9 00:37:35.182455 containerd[1566]: 2025-09-09 00:37:35.114 [INFO][4167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Namespace="calico-system" Pod="whisker-645556b648-5gjs6" WorkloadEndpoint="localhost-k8s-whisker--645556b648--5gjs6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--645556b648--5gjs6-eth0", GenerateName:"whisker-645556b648-", Namespace:"calico-system", SelfLink:"", UID:"e92bf4e5-5efe-42d1-9794-23cab49eda09", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"645556b648", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-645556b648-5gjs6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali57056db4a0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:35.182455 containerd[1566]: 2025-09-09 00:37:35.114 [INFO][4167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Namespace="calico-system" Pod="whisker-645556b648-5gjs6" WorkloadEndpoint="localhost-k8s-whisker--645556b648--5gjs6-eth0" Sep 9 00:37:35.182531 containerd[1566]: 2025-09-09 00:37:35.114 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57056db4a0d ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Namespace="calico-system" Pod="whisker-645556b648-5gjs6" WorkloadEndpoint="localhost-k8s-whisker--645556b648--5gjs6-eth0" Sep 9 00:37:35.182531 containerd[1566]: 2025-09-09 00:37:35.121 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Namespace="calico-system" Pod="whisker-645556b648-5gjs6" WorkloadEndpoint="localhost-k8s-whisker--645556b648--5gjs6-eth0" Sep 9 00:37:35.182579 containerd[1566]: 2025-09-09 00:37:35.122 [INFO][4167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Namespace="calico-system" Pod="whisker-645556b648-5gjs6" 
WorkloadEndpoint="localhost-k8s-whisker--645556b648--5gjs6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--645556b648--5gjs6-eth0", GenerateName:"whisker-645556b648-", Namespace:"calico-system", SelfLink:"", UID:"e92bf4e5-5efe-42d1-9794-23cab49eda09", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"645556b648", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94", Pod:"whisker-645556b648-5gjs6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali57056db4a0d", MAC:"8a:bc:2f:6c:27:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:35.182629 containerd[1566]: 2025-09-09 00:37:35.176 [INFO][4167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" Namespace="calico-system" Pod="whisker-645556b648-5gjs6" WorkloadEndpoint="localhost-k8s-whisker--645556b648--5gjs6-eth0" Sep 9 00:37:35.532395 containerd[1566]: time="2025-09-09T00:37:35.532297638Z" level=info msg="connecting to shim 
1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769" address="unix:///run/containerd/s/8cc693100fe30ed789ab756873a2210276198c86d176d5b5ff1e4923175d95e1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:37:35.550330 containerd[1566]: time="2025-09-09T00:37:35.550268713Z" level=info msg="connecting to shim a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94" address="unix:///run/containerd/s/304e5cbd85d48ffcd0197f22b27ab5898ac30a231988d47f1984773624862413" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:37:35.585200 systemd[1]: Started cri-containerd-1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769.scope - libcontainer container 1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769. Sep 9 00:37:35.604190 systemd[1]: Started cri-containerd-a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94.scope - libcontainer container a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94. Sep 9 00:37:35.612777 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:35.632542 systemd-networkd[1488]: vxlan.calico: Link UP Sep 9 00:37:35.632552 systemd-networkd[1488]: vxlan.calico: Gained carrier Sep 9 00:37:35.636149 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:35.670285 containerd[1566]: time="2025-09-09T00:37:35.669973361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fc7d657c6-nwxlc,Uid:ca1b3cef-013c-458e-9dbd-c26a121d5707,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769\"" Sep 9 00:37:35.693313 containerd[1566]: time="2025-09-09T00:37:35.693253625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645556b648-5gjs6,Uid:e92bf4e5-5efe-42d1-9794-23cab49eda09,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94\"" Sep 9 00:37:36.052168 systemd-networkd[1488]: cali33c787db901: Gained IPv6LL Sep 9 00:37:36.627196 systemd-networkd[1488]: cali57056db4a0d: Gained IPv6LL Sep 9 00:37:37.139141 systemd-networkd[1488]: vxlan.calico: Gained IPv6LL Sep 9 00:37:37.628456 systemd[1]: Started sshd@8-10.0.0.5:22-10.0.0.1:36880.service - OpenSSH per-connection server daemon (10.0.0.1:36880). Sep 9 00:37:37.713281 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 36880 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:37:37.715012 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:37.719836 systemd-logind[1550]: New session 9 of user core. Sep 9 00:37:37.729007 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:37:37.854678 sshd[4438]: Connection closed by 10.0.0.1 port 36880 Sep 9 00:37:37.855046 sshd-session[4435]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:37.859685 systemd[1]: sshd@8-10.0.0.5:22-10.0.0.1:36880.service: Deactivated successfully. Sep 9 00:37:37.861790 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:37:37.862622 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:37:37.863983 systemd-logind[1550]: Removed session 9. Sep 9 00:37:38.490903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687263612.mount: Deactivated successfully. 
Sep 9 00:37:41.257550 containerd[1566]: time="2025-09-09T00:37:41.257471155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:41.291783 containerd[1566]: time="2025-09-09T00:37:41.291704182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 00:37:41.336062 containerd[1566]: time="2025-09-09T00:37:41.335992430Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:41.363577 containerd[1566]: time="2025-09-09T00:37:41.363460025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:41.364755 containerd[1566]: time="2025-09-09T00:37:41.364697725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.975466381s" Sep 9 00:37:41.364755 containerd[1566]: time="2025-09-09T00:37:41.364747691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 00:37:41.374210 containerd[1566]: time="2025-09-09T00:37:41.374146103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:37:41.378721 containerd[1566]: time="2025-09-09T00:37:41.378683480Z" level=info msg="CreateContainer within sandbox 
\"24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:37:41.691772 containerd[1566]: time="2025-09-09T00:37:41.691644267Z" level=info msg="Container 27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:41.709347 containerd[1566]: time="2025-09-09T00:37:41.709292764Z" level=info msg="CreateContainer within sandbox \"24ea6b8fa6c3ccb79032b4ad52e2a950f7369f94ac50ab36dc24736fa84eced4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804\"" Sep 9 00:37:41.710091 containerd[1566]: time="2025-09-09T00:37:41.710051356Z" level=info msg="StartContainer for \"27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804\"" Sep 9 00:37:41.712011 containerd[1566]: time="2025-09-09T00:37:41.711972284Z" level=info msg="connecting to shim 27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804" address="unix:///run/containerd/s/cab4add1ae2b315445d59ab39c8e2fd88da93357819da9291ede4c8dd2f79068" protocol=ttrpc version=3 Sep 9 00:37:41.734101 systemd[1]: Started cri-containerd-27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804.scope - libcontainer container 27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804. 
Sep 9 00:37:41.967626 containerd[1566]: time="2025-09-09T00:37:41.967106605Z" level=info msg="StartContainer for \"27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804\" returns successfully" Sep 9 00:37:42.266642 containerd[1566]: time="2025-09-09T00:37:42.266195674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-ldv9n,Uid:5eabb735-48c0-4991-a154-a56147deb87c,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:37:42.606725 kubelet[2733]: I0909 00:37:42.605853 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-nktsk" podStartSLOduration=29.618374962 podStartE2EDuration="36.6058311s" podCreationTimestamp="2025-09-09 00:37:06 +0000 UTC" firstStartedPulling="2025-09-09 00:37:34.386436791 +0000 UTC m=+46.217318993" lastFinishedPulling="2025-09-09 00:37:41.373892929 +0000 UTC m=+53.204775131" observedRunningTime="2025-09-09 00:37:42.602647339 +0000 UTC m=+54.433529561" watchObservedRunningTime="2025-09-09 00:37:42.6058311 +0000 UTC m=+54.436713302" Sep 9 00:37:42.785160 systemd-networkd[1488]: calib9ea7899b76: Link UP Sep 9 00:37:42.786460 systemd-networkd[1488]: calib9ea7899b76: Gained carrier Sep 9 00:37:42.807463 containerd[1566]: 2025-09-09 00:37:42.678 [INFO][4511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0 calico-apiserver-7df6bdd7ff- calico-apiserver 5eabb735-48c0-4991-a154-a56147deb87c 880 0 2025-09-09 00:37:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7df6bdd7ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7df6bdd7ff-ldv9n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9ea7899b76 [] [] }} 
ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-ldv9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-" Sep 9 00:37:42.807463 containerd[1566]: 2025-09-09 00:37:42.678 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-ldv9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0" Sep 9 00:37:42.807463 containerd[1566]: 2025-09-09 00:37:42.706 [INFO][4521] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" HandleID="k8s-pod-network.d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Workload="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0" Sep 9 00:37:42.807754 containerd[1566]: 2025-09-09 00:37:42.706 [INFO][4521] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" HandleID="k8s-pod-network.d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Workload="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7df6bdd7ff-ldv9n", "timestamp":"2025-09-09 00:37:42.706428289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:42.807754 containerd[1566]: 2025-09-09 00:37:42.706 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:37:42.807754 containerd[1566]: 2025-09-09 00:37:42.706 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:42.807754 containerd[1566]: 2025-09-09 00:37:42.706 [INFO][4521] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:42.807754 containerd[1566]: 2025-09-09 00:37:42.745 [INFO][4521] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" host="localhost" Sep 9 00:37:42.807754 containerd[1566]: 2025-09-09 00:37:42.754 [INFO][4521] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:42.807754 containerd[1566]: 2025-09-09 00:37:42.759 [INFO][4521] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:42.807754 containerd[1566]: 2025-09-09 00:37:42.761 [INFO][4521] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:42.807754 containerd[1566]: 2025-09-09 00:37:42.763 [INFO][4521] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:42.807754 containerd[1566]: 2025-09-09 00:37:42.763 [INFO][4521] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" host="localhost" Sep 9 00:37:42.809004 containerd[1566]: 2025-09-09 00:37:42.764 [INFO][4521] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593 Sep 9 00:37:42.809004 containerd[1566]: 2025-09-09 00:37:42.769 [INFO][4521] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" host="localhost" Sep 9 00:37:42.809004 containerd[1566]: 2025-09-09 00:37:42.777 [INFO][4521] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" host="localhost" Sep 9 00:37:42.809004 containerd[1566]: 2025-09-09 00:37:42.777 [INFO][4521] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" host="localhost" Sep 9 00:37:42.809004 containerd[1566]: 2025-09-09 00:37:42.777 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:42.809004 containerd[1566]: 2025-09-09 00:37:42.777 [INFO][4521] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" HandleID="k8s-pod-network.d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Workload="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0" Sep 9 00:37:42.809157 containerd[1566]: 2025-09-09 00:37:42.781 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-ldv9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0", GenerateName:"calico-apiserver-7df6bdd7ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"5eabb735-48c0-4991-a154-a56147deb87c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df6bdd7ff", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7df6bdd7ff-ldv9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9ea7899b76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:42.809253 containerd[1566]: 2025-09-09 00:37:42.782 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-ldv9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0" Sep 9 00:37:42.809253 containerd[1566]: 2025-09-09 00:37:42.782 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9ea7899b76 ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-ldv9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0" Sep 9 00:37:42.809253 containerd[1566]: 2025-09-09 00:37:42.788 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-ldv9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0" Sep 9 00:37:42.809326 containerd[1566]: 2025-09-09 
00:37:42.790 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-ldv9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0", GenerateName:"calico-apiserver-7df6bdd7ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"5eabb735-48c0-4991-a154-a56147deb87c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df6bdd7ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593", Pod:"calico-apiserver-7df6bdd7ff-ldv9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9ea7899b76", MAC:"da:d8:69:7e:4d:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:42.809381 containerd[1566]: 2025-09-09 00:37:42.799 [INFO][4511] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-ldv9n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--ldv9n-eth0" Sep 9 00:37:42.837268 containerd[1566]: time="2025-09-09T00:37:42.837138917Z" level=info msg="connecting to shim d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593" address="unix:///run/containerd/s/edc826c165d05739a816c281a79ccfe3ac701bb679dc0650bc7678d83a922dda" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:37:42.880080 systemd[1]: Started cri-containerd-d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593.scope - libcontainer container d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593. Sep 9 00:37:42.882865 systemd[1]: Started sshd@9-10.0.0.5:22-10.0.0.1:41158.service - OpenSSH per-connection server daemon (10.0.0.1:41158). Sep 9 00:37:42.907962 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:42.944136 sshd[4574]: Accepted publickey for core from 10.0.0.1 port 41158 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:37:42.945704 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:42.946420 containerd[1566]: time="2025-09-09T00:37:42.946383502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-ldv9n,Uid:5eabb735-48c0-4991-a154-a56147deb87c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593\"" Sep 9 00:37:42.951719 systemd-logind[1550]: New session 10 of user core. Sep 9 00:37:42.960026 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 9 00:37:43.199451 sshd[4589]: Connection closed by 10.0.0.1 port 41158 Sep 9 00:37:43.199717 sshd-session[4574]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:43.204982 systemd[1]: sshd@9-10.0.0.5:22-10.0.0.1:41158.service: Deactivated successfully. Sep 9 00:37:43.207388 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:37:43.208625 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:37:43.210383 systemd-logind[1550]: Removed session 10. Sep 9 00:37:43.265385 kubelet[2733]: E0909 00:37:43.265340 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:43.266131 containerd[1566]: time="2025-09-09T00:37:43.266094608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-rmz9h,Uid:0cd1dd39-fe2c-45f4-8309-3b93ea396e71,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:37:43.266307 containerd[1566]: time="2025-09-09T00:37:43.266167848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p62qs,Uid:05b853f1-12c7-471b-853c-d97bde5dec17,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:43.287990 containerd[1566]: time="2025-09-09T00:37:43.287849976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804\" id:\"5299d277e98e82f7ad936785d46b438da10788c4ce646defaf1fba0e0dd4852d\" pid:4614 exit_status:1 exited_at:{seconds:1757378263 nanos:287407940}" Sep 9 00:37:43.369153 containerd[1566]: time="2025-09-09T00:37:43.369031735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804\" id:\"721c5ee274b0781a0ea1fcd4108067f4b77dc19f76bf28b26d79b48276170b1e\" pid:4638 exit_status:1 exited_at:{seconds:1757378263 nanos:368660765}" Sep 9 00:37:43.575279 containerd[1566]: 
time="2025-09-09T00:37:43.575222402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804\" id:\"f74c5554b4eed35d719896bb2ac30fe737bacc3fb553ff8e04e491f59ff1c387\" pid:4662 exit_status:1 exited_at:{seconds:1757378263 nanos:574665487}" Sep 9 00:37:43.843194 systemd-networkd[1488]: calia3cef91e060: Link UP Sep 9 00:37:43.846416 systemd-networkd[1488]: calia3cef91e060: Gained carrier Sep 9 00:37:43.864623 containerd[1566]: 2025-09-09 00:37:43.622 [INFO][4674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0 calico-apiserver-7df6bdd7ff- calico-apiserver 0cd1dd39-fe2c-45f4-8309-3b93ea396e71 876 0 2025-09-09 00:37:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7df6bdd7ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7df6bdd7ff-rmz9h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia3cef91e060 [] [] }} ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-rmz9h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-" Sep 9 00:37:43.864623 containerd[1566]: 2025-09-09 00:37:43.623 [INFO][4674] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-rmz9h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0" Sep 9 00:37:43.864623 containerd[1566]: 2025-09-09 00:37:43.646 [INFO][4704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" HandleID="k8s-pod-network.cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Workload="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0" Sep 9 00:37:43.864951 containerd[1566]: 2025-09-09 00:37:43.646 [INFO][4704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" HandleID="k8s-pod-network.cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Workload="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7df6bdd7ff-rmz9h", "timestamp":"2025-09-09 00:37:43.646659796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:43.864951 containerd[1566]: 2025-09-09 00:37:43.646 [INFO][4704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:43.864951 containerd[1566]: 2025-09-09 00:37:43.647 [INFO][4704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:37:43.864951 containerd[1566]: 2025-09-09 00:37:43.647 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:43.864951 containerd[1566]: 2025-09-09 00:37:43.662 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" host="localhost" Sep 9 00:37:43.864951 containerd[1566]: 2025-09-09 00:37:43.673 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:43.864951 containerd[1566]: 2025-09-09 00:37:43.680 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:43.864951 containerd[1566]: 2025-09-09 00:37:43.683 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:43.864951 containerd[1566]: 2025-09-09 00:37:43.686 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:43.864951 containerd[1566]: 2025-09-09 00:37:43.686 [INFO][4704] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" host="localhost" Sep 9 00:37:43.865204 containerd[1566]: 2025-09-09 00:37:43.693 [INFO][4704] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2 Sep 9 00:37:43.865204 containerd[1566]: 2025-09-09 00:37:43.734 [INFO][4704] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" host="localhost" Sep 9 00:37:43.865204 containerd[1566]: 2025-09-09 00:37:43.819 [INFO][4704] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" host="localhost" Sep 9 00:37:43.865204 containerd[1566]: 2025-09-09 00:37:43.819 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" host="localhost" Sep 9 00:37:43.865204 containerd[1566]: 2025-09-09 00:37:43.820 [INFO][4704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:43.865204 containerd[1566]: 2025-09-09 00:37:43.820 [INFO][4704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" HandleID="k8s-pod-network.cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Workload="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0" Sep 9 00:37:43.865332 containerd[1566]: 2025-09-09 00:37:43.831 [INFO][4674] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-rmz9h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0", GenerateName:"calico-apiserver-7df6bdd7ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"0cd1dd39-fe2c-45f4-8309-3b93ea396e71", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df6bdd7ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7df6bdd7ff-rmz9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3cef91e060", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:43.865389 containerd[1566]: 2025-09-09 00:37:43.831 [INFO][4674] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-rmz9h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0" Sep 9 00:37:43.865389 containerd[1566]: 2025-09-09 00:37:43.831 [INFO][4674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3cef91e060 ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-rmz9h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0" Sep 9 00:37:43.865389 containerd[1566]: 2025-09-09 00:37:43.848 [INFO][4674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-rmz9h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0" Sep 9 00:37:43.865462 containerd[1566]: 2025-09-09 00:37:43.849 [INFO][4674] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-rmz9h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0", GenerateName:"calico-apiserver-7df6bdd7ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"0cd1dd39-fe2c-45f4-8309-3b93ea396e71", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df6bdd7ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2", Pod:"calico-apiserver-7df6bdd7ff-rmz9h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3cef91e060", MAC:"3e:ff:96:74:36:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:43.865521 containerd[1566]: 2025-09-09 00:37:43.860 [INFO][4674] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" Namespace="calico-apiserver" Pod="calico-apiserver-7df6bdd7ff-rmz9h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df6bdd7ff--rmz9h-eth0" Sep 9 00:37:43.882788 systemd-networkd[1488]: cali2f5bf3c1053: Link UP Sep 9 00:37:43.883681 systemd-networkd[1488]: cali2f5bf3c1053: Gained carrier Sep 9 00:37:43.969378 containerd[1566]: 2025-09-09 00:37:43.624 [INFO][4686] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0 coredns-7c65d6cfc9- kube-system 05b853f1-12c7-471b-853c-d97bde5dec17 867 0 2025-09-09 00:36:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-p62qs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2f5bf3c1053 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p62qs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p62qs-" Sep 9 00:37:43.969378 containerd[1566]: 2025-09-09 00:37:43.624 [INFO][4686] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p62qs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0" Sep 9 00:37:43.969378 containerd[1566]: 2025-09-09 00:37:43.654 [INFO][4711] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" HandleID="k8s-pod-network.6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Workload="localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0" Sep 9 00:37:43.969668 
containerd[1566]: 2025-09-09 00:37:43.654 [INFO][4711] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" HandleID="k8s-pod-network.6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Workload="localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139420), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-p62qs", "timestamp":"2025-09-09 00:37:43.654504017 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:43.969668 containerd[1566]: 2025-09-09 00:37:43.655 [INFO][4711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:43.969668 containerd[1566]: 2025-09-09 00:37:43.820 [INFO][4711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:37:43.969668 containerd[1566]: 2025-09-09 00:37:43.820 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:43.969668 containerd[1566]: 2025-09-09 00:37:43.827 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" host="localhost" Sep 9 00:37:43.969668 containerd[1566]: 2025-09-09 00:37:43.833 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:43.969668 containerd[1566]: 2025-09-09 00:37:43.841 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:43.969668 containerd[1566]: 2025-09-09 00:37:43.848 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:43.969668 containerd[1566]: 2025-09-09 00:37:43.854 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:43.969668 containerd[1566]: 2025-09-09 00:37:43.854 [INFO][4711] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" host="localhost" Sep 9 00:37:43.969980 containerd[1566]: 2025-09-09 00:37:43.856 [INFO][4711] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307 Sep 9 00:37:43.969980 containerd[1566]: 2025-09-09 00:37:43.864 [INFO][4711] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" host="localhost" Sep 9 00:37:43.969980 containerd[1566]: 2025-09-09 00:37:43.876 [INFO][4711] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" host="localhost" Sep 9 00:37:43.969980 containerd[1566]: 2025-09-09 00:37:43.876 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" host="localhost" Sep 9 00:37:43.969980 containerd[1566]: 2025-09-09 00:37:43.876 [INFO][4711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:43.969980 containerd[1566]: 2025-09-09 00:37:43.876 [INFO][4711] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" HandleID="k8s-pod-network.6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Workload="localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0" Sep 9 00:37:43.970121 containerd[1566]: 2025-09-09 00:37:43.879 [INFO][4686] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p62qs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"05b853f1-12c7-471b-853c-d97bde5dec17", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-p62qs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f5bf3c1053", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:43.970207 containerd[1566]: 2025-09-09 00:37:43.879 [INFO][4686] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p62qs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0" Sep 9 00:37:43.970207 containerd[1566]: 2025-09-09 00:37:43.879 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f5bf3c1053 ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p62qs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0" Sep 9 00:37:43.970207 containerd[1566]: 2025-09-09 00:37:43.884 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p62qs" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0" Sep 9 00:37:43.970348 containerd[1566]: 2025-09-09 00:37:43.885 [INFO][4686] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p62qs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"05b853f1-12c7-471b-853c-d97bde5dec17", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307", Pod:"coredns-7c65d6cfc9-p62qs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f5bf3c1053", MAC:"c2:ad:8a:d5:68:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:43.970348 containerd[1566]: 2025-09-09 00:37:43.965 [INFO][4686] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p62qs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p62qs-eth0" Sep 9 00:37:43.987337 systemd-networkd[1488]: calib9ea7899b76: Gained IPv6LL Sep 9 00:37:44.163309 containerd[1566]: time="2025-09-09T00:37:44.162976315Z" level=info msg="connecting to shim cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2" address="unix:///run/containerd/s/24bfb9d3db4f00e218e5db1f6c37c6717ffff918e7564f0e96a5e1553b62dd7a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:37:44.200087 containerd[1566]: time="2025-09-09T00:37:44.200040776Z" level=info msg="connecting to shim 6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307" address="unix:///run/containerd/s/c305f2155cde64698482434ae1ec0dd193ba528447c46adebd4c93d373621b84" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:37:44.205169 systemd[1]: Started cri-containerd-cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2.scope - libcontainer container cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2. Sep 9 00:37:44.232866 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:44.247277 systemd[1]: Started cri-containerd-6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307.scope - libcontainer container 6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307. 
Sep 9 00:37:44.266644 kubelet[2733]: E0909 00:37:44.266607 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:44.267085 containerd[1566]: time="2025-09-09T00:37:44.266965476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df6bdd7ff-rmz9h,Uid:0cd1dd39-fe2c-45f4-8309-3b93ea396e71,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2\"" Sep 9 00:37:44.267455 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:44.269131 containerd[1566]: time="2025-09-09T00:37:44.268271993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bfkfk,Uid:bfb4a09d-bdfb-4efb-a83c-d7bb472ba089,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:44.306905 containerd[1566]: time="2025-09-09T00:37:44.306834277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p62qs,Uid:05b853f1-12c7-471b-853c-d97bde5dec17,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307\"" Sep 9 00:37:44.308110 kubelet[2733]: E0909 00:37:44.307741 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:44.312222 containerd[1566]: time="2025-09-09T00:37:44.312173464Z" level=info msg="CreateContainer within sandbox \"6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:37:44.325792 containerd[1566]: time="2025-09-09T00:37:44.325749966Z" level=info msg="Container ad82434bc7b60e2f0b802fc686e3686352855033c7f8e7d8c24ddfc5fcdae03f: CDI devices from CRI Config.CDIDevices: []" Sep 9 
00:37:44.333450 containerd[1566]: time="2025-09-09T00:37:44.333242619Z" level=info msg="CreateContainer within sandbox \"6ddbfeebb82614d3025a6b965a0c1f063a6184c2711c6a8156557b5710b73307\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad82434bc7b60e2f0b802fc686e3686352855033c7f8e7d8c24ddfc5fcdae03f\"" Sep 9 00:37:44.334078 containerd[1566]: time="2025-09-09T00:37:44.334030164Z" level=info msg="StartContainer for \"ad82434bc7b60e2f0b802fc686e3686352855033c7f8e7d8c24ddfc5fcdae03f\"" Sep 9 00:37:44.338062 containerd[1566]: time="2025-09-09T00:37:44.338022846Z" level=info msg="connecting to shim ad82434bc7b60e2f0b802fc686e3686352855033c7f8e7d8c24ddfc5fcdae03f" address="unix:///run/containerd/s/c305f2155cde64698482434ae1ec0dd193ba528447c46adebd4c93d373621b84" protocol=ttrpc version=3 Sep 9 00:37:44.366101 systemd[1]: Started cri-containerd-ad82434bc7b60e2f0b802fc686e3686352855033c7f8e7d8c24ddfc5fcdae03f.scope - libcontainer container ad82434bc7b60e2f0b802fc686e3686352855033c7f8e7d8c24ddfc5fcdae03f. 
Sep 9 00:37:44.405859 systemd-networkd[1488]: cali6ac51fcc60f: Link UP Sep 9 00:37:44.407305 systemd-networkd[1488]: cali6ac51fcc60f: Gained carrier Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.321 [INFO][4829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0 coredns-7c65d6cfc9- kube-system bfb4a09d-bdfb-4efb-a83c-d7bb472ba089 878 0 2025-09-09 00:36:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-bfkfk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ac51fcc60f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfkfk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfkfk-" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.321 [INFO][4829] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfkfk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.356 [INFO][4851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" HandleID="k8s-pod-network.7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Workload="localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.356 [INFO][4851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" 
HandleID="k8s-pod-network.7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Workload="localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135480), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-bfkfk", "timestamp":"2025-09-09 00:37:44.356305745 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.356 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.356 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.357 [INFO][4851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.364 [INFO][4851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" host="localhost" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.373 [INFO][4851] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.378 [INFO][4851] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.380 [INFO][4851] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.384 [INFO][4851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.384 [INFO][4851] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" host="localhost" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.385 [INFO][4851] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860 Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.390 [INFO][4851] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" host="localhost" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.399 [INFO][4851] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" host="localhost" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.399 [INFO][4851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" host="localhost" Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.399 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:37:44.426599 containerd[1566]: 2025-09-09 00:37:44.399 [INFO][4851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" HandleID="k8s-pod-network.7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Workload="localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0" Sep 9 00:37:44.428028 containerd[1566]: 2025-09-09 00:37:44.403 [INFO][4829] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfkfk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bfb4a09d-bdfb-4efb-a83c-d7bb472ba089", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-bfkfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ac51fcc60f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:44.428028 containerd[1566]: 2025-09-09 00:37:44.403 [INFO][4829] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfkfk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0" Sep 9 00:37:44.428028 containerd[1566]: 2025-09-09 00:37:44.403 [INFO][4829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ac51fcc60f ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfkfk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0" Sep 9 00:37:44.428028 containerd[1566]: 2025-09-09 00:37:44.406 [INFO][4829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfkfk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0" Sep 9 00:37:44.428028 containerd[1566]: 2025-09-09 00:37:44.407 [INFO][4829] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfkfk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bfb4a09d-bdfb-4efb-a83c-d7bb472ba089", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 36, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860", Pod:"coredns-7c65d6cfc9-bfkfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ac51fcc60f", MAC:"9e:34:63:ee:c1:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:44.428028 containerd[1566]: 2025-09-09 00:37:44.418 [INFO][4829] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bfkfk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--bfkfk-eth0" Sep 9 00:37:44.430434 containerd[1566]: time="2025-09-09T00:37:44.430397445Z" level=info msg="StartContainer for \"ad82434bc7b60e2f0b802fc686e3686352855033c7f8e7d8c24ddfc5fcdae03f\" returns successfully" Sep 9 00:37:44.453081 containerd[1566]: time="2025-09-09T00:37:44.453028356Z" level=info msg="connecting to shim 7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860" address="unix:///run/containerd/s/6da2705dd5214ae00549935f2d96cc457efbf04faa41cecbd84cb8f56fbb0ab0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:37:44.493690 kubelet[2733]: E0909 00:37:44.493464 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:44.540273 systemd[1]: Started cri-containerd-7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860.scope - libcontainer container 7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860. 
Sep 9 00:37:44.557503 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:44.629629 containerd[1566]: time="2025-09-09T00:37:44.629585676Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804\" id:\"0656b0a813bacb5a9158ae53074558ca1423395e75298cae60d2395202d17938\" pid:4948 exit_status:1 exited_at:{seconds:1757378264 nanos:629029131}" Sep 9 00:37:44.854228 kubelet[2733]: I0909 00:37:44.854151 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-p62qs" podStartSLOduration=53.854114014 podStartE2EDuration="53.854114014s" podCreationTimestamp="2025-09-09 00:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:44.85128718 +0000 UTC m=+56.682169382" watchObservedRunningTime="2025-09-09 00:37:44.854114014 +0000 UTC m=+56.684996216" Sep 9 00:37:44.983022 containerd[1566]: time="2025-09-09T00:37:44.982958037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bfkfk,Uid:bfb4a09d-bdfb-4efb-a83c-d7bb472ba089,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860\"" Sep 9 00:37:44.984116 kubelet[2733]: E0909 00:37:44.983770 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:44.995743 containerd[1566]: time="2025-09-09T00:37:44.995698932Z" level=info msg="CreateContainer within sandbox \"7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:37:45.395100 systemd-networkd[1488]: calia3cef91e060: Gained IPv6LL Sep 9 00:37:45.494983 kubelet[2733]: E0909 
00:37:45.494946 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:45.523078 systemd-networkd[1488]: cali2f5bf3c1053: Gained IPv6LL Sep 9 00:37:45.591632 containerd[1566]: time="2025-09-09T00:37:45.591558226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:45.789261 containerd[1566]: time="2025-09-09T00:37:45.789193679Z" level=info msg="Container 5b0ea9b99f85fb0b30e443faa7237d8e1b5e2d74e419da99d46eaf28bb61597c: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:45.816673 containerd[1566]: time="2025-09-09T00:37:45.816564796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 00:37:45.824078 containerd[1566]: time="2025-09-09T00:37:45.824012636Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:45.830666 containerd[1566]: time="2025-09-09T00:37:45.830074498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:45.830666 containerd[1566]: time="2025-09-09T00:37:45.830536591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.45633968s" Sep 9 00:37:45.830666 containerd[1566]: 
time="2025-09-09T00:37:45.830571557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 00:37:45.830821 containerd[1566]: time="2025-09-09T00:37:45.830775096Z" level=info msg="CreateContainer within sandbox \"7f6a125ef0b9cb52c0a8cf777dc0f2dd55f1694777bf148a847cb8f1e7f33860\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b0ea9b99f85fb0b30e443faa7237d8e1b5e2d74e419da99d46eaf28bb61597c\"" Sep 9 00:37:45.831598 containerd[1566]: time="2025-09-09T00:37:45.831533686Z" level=info msg="StartContainer for \"5b0ea9b99f85fb0b30e443faa7237d8e1b5e2d74e419da99d46eaf28bb61597c\"" Sep 9 00:37:45.832691 containerd[1566]: time="2025-09-09T00:37:45.832656111Z" level=info msg="connecting to shim 5b0ea9b99f85fb0b30e443faa7237d8e1b5e2d74e419da99d46eaf28bb61597c" address="unix:///run/containerd/s/6da2705dd5214ae00549935f2d96cc457efbf04faa41cecbd84cb8f56fbb0ab0" protocol=ttrpc version=3 Sep 9 00:37:45.833401 containerd[1566]: time="2025-09-09T00:37:45.833367880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:37:45.854432 containerd[1566]: time="2025-09-09T00:37:45.854305394Z" level=info msg="CreateContainer within sandbox \"1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:37:45.866312 containerd[1566]: time="2025-09-09T00:37:45.866253393Z" level=info msg="Container 63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:45.871183 systemd[1]: Started cri-containerd-5b0ea9b99f85fb0b30e443faa7237d8e1b5e2d74e419da99d46eaf28bb61597c.scope - libcontainer container 5b0ea9b99f85fb0b30e443faa7237d8e1b5e2d74e419da99d46eaf28bb61597c. 
Sep 9 00:37:45.874697 containerd[1566]: time="2025-09-09T00:37:45.874657050Z" level=info msg="CreateContainer within sandbox \"1d534b53ed2476dcaf3763c27df4ab4a26924e97c5a7fffb2749e75730997769\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079\"" Sep 9 00:37:45.876942 containerd[1566]: time="2025-09-09T00:37:45.876543614Z" level=info msg="StartContainer for \"63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079\"" Sep 9 00:37:45.878462 containerd[1566]: time="2025-09-09T00:37:45.878424237Z" level=info msg="connecting to shim 63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079" address="unix:///run/containerd/s/8cc693100fe30ed789ab756873a2210276198c86d176d5b5ff1e4923175d95e1" protocol=ttrpc version=3 Sep 9 00:37:45.907072 systemd-networkd[1488]: cali6ac51fcc60f: Gained IPv6LL Sep 9 00:37:45.907135 systemd[1]: Started cri-containerd-63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079.scope - libcontainer container 63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079. 
Sep 9 00:37:45.919271 containerd[1566]: time="2025-09-09T00:37:45.919213893Z" level=info msg="StartContainer for \"5b0ea9b99f85fb0b30e443faa7237d8e1b5e2d74e419da99d46eaf28bb61597c\" returns successfully" Sep 9 00:37:45.974654 containerd[1566]: time="2025-09-09T00:37:45.974597621Z" level=info msg="StartContainer for \"63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079\" returns successfully" Sep 9 00:37:46.265683 containerd[1566]: time="2025-09-09T00:37:46.265539299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bdgfl,Uid:e2e3f318-b326-4ebf-beea-35cea16bcc19,Namespace:calico-system,Attempt:0,}" Sep 9 00:37:46.372382 systemd-networkd[1488]: cali32b773f1f07: Link UP Sep 9 00:37:46.372566 systemd-networkd[1488]: cali32b773f1f07: Gained carrier Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.303 [INFO][5070] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bdgfl-eth0 csi-node-driver- calico-system e2e3f318-b326-4ebf-beea-35cea16bcc19 765 0 2025-09-09 00:37:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bdgfl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali32b773f1f07 [] [] }} ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" Namespace="calico-system" Pod="csi-node-driver-bdgfl" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdgfl-" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.303 [INFO][5070] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" Namespace="calico-system" 
Pod="csi-node-driver-bdgfl" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdgfl-eth0" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.330 [INFO][5085] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" HandleID="k8s-pod-network.bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" Workload="localhost-k8s-csi--node--driver--bdgfl-eth0" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.330 [INFO][5085] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" HandleID="k8s-pod-network.bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" Workload="localhost-k8s-csi--node--driver--bdgfl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033ae80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bdgfl", "timestamp":"2025-09-09 00:37:46.330144641 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.330 [INFO][5085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.330 [INFO][5085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.330 [INFO][5085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.337 [INFO][5085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" host="localhost" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.342 [INFO][5085] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.347 [INFO][5085] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.349 [INFO][5085] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.353 [INFO][5085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.353 [INFO][5085] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" host="localhost" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.354 [INFO][5085] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168 Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.359 [INFO][5085] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" host="localhost" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.365 [INFO][5085] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" host="localhost" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.365 [INFO][5085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" host="localhost" Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.365 [INFO][5085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:37:46.390333 containerd[1566]: 2025-09-09 00:37:46.365 [INFO][5085] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" HandleID="k8s-pod-network.bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" Workload="localhost-k8s-csi--node--driver--bdgfl-eth0" Sep 9 00:37:46.391286 containerd[1566]: 2025-09-09 00:37:46.369 [INFO][5070] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" Namespace="calico-system" Pod="csi-node-driver-bdgfl" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdgfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bdgfl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2e3f318-b326-4ebf-beea-35cea16bcc19", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bdgfl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32b773f1f07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:46.391286 containerd[1566]: 2025-09-09 00:37:46.369 [INFO][5070] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" Namespace="calico-system" Pod="csi-node-driver-bdgfl" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdgfl-eth0" Sep 9 00:37:46.391286 containerd[1566]: 2025-09-09 00:37:46.369 [INFO][5070] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32b773f1f07 ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" Namespace="calico-system" Pod="csi-node-driver-bdgfl" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdgfl-eth0" Sep 9 00:37:46.391286 containerd[1566]: 2025-09-09 00:37:46.372 [INFO][5070] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" Namespace="calico-system" Pod="csi-node-driver-bdgfl" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdgfl-eth0" Sep 9 00:37:46.391286 containerd[1566]: 2025-09-09 00:37:46.373 [INFO][5070] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" 
Namespace="calico-system" Pod="csi-node-driver-bdgfl" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdgfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bdgfl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2e3f318-b326-4ebf-beea-35cea16bcc19", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 37, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168", Pod:"csi-node-driver-bdgfl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32b773f1f07", MAC:"0e:68:43:ac:9b:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:37:46.391286 containerd[1566]: 2025-09-09 00:37:46.384 [INFO][5070] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" Namespace="calico-system" Pod="csi-node-driver-bdgfl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bdgfl-eth0" Sep 9 00:37:46.419110 containerd[1566]: time="2025-09-09T00:37:46.419055481Z" level=info msg="connecting to shim bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168" address="unix:///run/containerd/s/ac56bf67ab4571ee39b54c6f58130c3add54bad7091d0e337b2c5d142b0e4b66" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:37:46.463046 systemd[1]: Started cri-containerd-bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168.scope - libcontainer container bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168. Sep 9 00:37:46.478917 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:37:46.502719 kubelet[2733]: E0909 00:37:46.502634 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:46.503816 kubelet[2733]: E0909 00:37:46.503788 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:46.547540 containerd[1566]: time="2025-09-09T00:37:46.547489621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079\" id:\"049793bd207cb18836528d8f6a32c3df8622ecf51f91e00fa7f422d584378d98\" pid:5159 exited_at:{seconds:1757378266 nanos:546516222}" Sep 9 00:37:46.602007 containerd[1566]: time="2025-09-09T00:37:46.601934538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bdgfl,Uid:e2e3f318-b326-4ebf-beea-35cea16bcc19,Namespace:calico-system,Attempt:0,} returns sandbox id \"bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168\"" Sep 9 00:37:46.667788 kubelet[2733]: I0909 00:37:46.667078 2733 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-system/calico-kube-controllers-6fc7d657c6-nwxlc" podStartSLOduration=28.506569169 podStartE2EDuration="38.66702131s" podCreationTimestamp="2025-09-09 00:37:08 +0000 UTC" firstStartedPulling="2025-09-09 00:37:35.671522868 +0000 UTC m=+47.502405070" lastFinishedPulling="2025-09-09 00:37:45.831975009 +0000 UTC m=+57.662857211" observedRunningTime="2025-09-09 00:37:46.666769569 +0000 UTC m=+58.497651801" watchObservedRunningTime="2025-09-09 00:37:46.66702131 +0000 UTC m=+58.497903522" Sep 9 00:37:46.696078 kubelet[2733]: I0909 00:37:46.695951 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bfkfk" podStartSLOduration=55.695924909 podStartE2EDuration="55.695924909s" podCreationTimestamp="2025-09-09 00:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:46.695149879 +0000 UTC m=+58.526032071" watchObservedRunningTime="2025-09-09 00:37:46.695924909 +0000 UTC m=+58.526807121" Sep 9 00:37:47.505549 kubelet[2733]: E0909 00:37:47.505497 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:47.763101 systemd-networkd[1488]: cali32b773f1f07: Gained IPv6LL Sep 9 00:37:47.972947 containerd[1566]: time="2025-09-09T00:37:47.972868414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:37:47.973423 containerd[1566]: time="2025-09-09T00:37:47.973040762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:47.974464 containerd[1566]: time="2025-09-09T00:37:47.974429333Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:47.977232 containerd[1566]: time="2025-09-09T00:37:47.977154706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:47.977889 containerd[1566]: time="2025-09-09T00:37:47.977836096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.144428249s" Sep 9 00:37:47.977942 containerd[1566]: time="2025-09-09T00:37:47.977868789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:37:47.978927 containerd[1566]: time="2025-09-09T00:37:47.978896780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:37:47.981156 containerd[1566]: time="2025-09-09T00:37:47.981123161Z" level=info msg="CreateContainer within sandbox \"a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:37:47.989785 containerd[1566]: time="2025-09-09T00:37:47.989730011Z" level=info msg="Container d3e5c752eb5ed509dc294972b25a942181ea78f9193861fad2ca63de87b8ecdf: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:47.998665 containerd[1566]: time="2025-09-09T00:37:47.998614229Z" level=info msg="CreateContainer within sandbox \"a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d3e5c752eb5ed509dc294972b25a942181ea78f9193861fad2ca63de87b8ecdf\"" Sep 9 
00:37:48.000568 containerd[1566]: time="2025-09-09T00:37:47.999390863Z" level=info msg="StartContainer for \"d3e5c752eb5ed509dc294972b25a942181ea78f9193861fad2ca63de87b8ecdf\"" Sep 9 00:37:48.000568 containerd[1566]: time="2025-09-09T00:37:48.000529907Z" level=info msg="connecting to shim d3e5c752eb5ed509dc294972b25a942181ea78f9193861fad2ca63de87b8ecdf" address="unix:///run/containerd/s/304e5cbd85d48ffcd0197f22b27ab5898ac30a231988d47f1984773624862413" protocol=ttrpc version=3 Sep 9 00:37:48.026021 systemd[1]: Started cri-containerd-d3e5c752eb5ed509dc294972b25a942181ea78f9193861fad2ca63de87b8ecdf.scope - libcontainer container d3e5c752eb5ed509dc294972b25a942181ea78f9193861fad2ca63de87b8ecdf. Sep 9 00:37:48.106420 containerd[1566]: time="2025-09-09T00:37:48.106362808Z" level=info msg="StartContainer for \"d3e5c752eb5ed509dc294972b25a942181ea78f9193861fad2ca63de87b8ecdf\" returns successfully" Sep 9 00:37:48.215116 systemd[1]: Started sshd@10-10.0.0.5:22-10.0.0.1:41164.service - OpenSSH per-connection server daemon (10.0.0.1:41164). Sep 9 00:37:48.278348 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 41164 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:37:48.280244 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:48.285008 systemd-logind[1550]: New session 11 of user core. Sep 9 00:37:48.290106 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:37:48.441400 sshd[5214]: Connection closed by 10.0.0.1 port 41164 Sep 9 00:37:48.441804 sshd-session[5210]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:48.446202 systemd[1]: sshd@10-10.0.0.5:22-10.0.0.1:41164.service: Deactivated successfully. Sep 9 00:37:48.449099 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:37:48.452965 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:37:48.455469 systemd-logind[1550]: Removed session 11. 
Sep 9 00:37:48.509346 kubelet[2733]: E0909 00:37:48.509310 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:50.996585 containerd[1566]: time="2025-09-09T00:37:50.996514991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:51.015244 containerd[1566]: time="2025-09-09T00:37:50.997357979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:37:51.015341 containerd[1566]: time="2025-09-09T00:37:50.998511428Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:51.015384 containerd[1566]: time="2025-09-09T00:37:51.001614403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.022688487s" Sep 9 00:37:51.015420 containerd[1566]: time="2025-09-09T00:37:51.015382242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:37:51.016269 containerd[1566]: time="2025-09-09T00:37:51.016198999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:51.016754 containerd[1566]: time="2025-09-09T00:37:51.016534578Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:37:51.018557 containerd[1566]: time="2025-09-09T00:37:51.018515934Z" level=info msg="CreateContainer within sandbox \"d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:37:51.028336 containerd[1566]: time="2025-09-09T00:37:51.028274111Z" level=info msg="Container 6d2015ff1d6e53dcb77905ad2e0e66df624f2915b1a1943a12b114ac62885a09: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:51.039761 containerd[1566]: time="2025-09-09T00:37:51.039720726Z" level=info msg="CreateContainer within sandbox \"d8c744b2025e5d795cb2978ae1aae3ed0a103e222669e7367135413ab53de593\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d2015ff1d6e53dcb77905ad2e0e66df624f2915b1a1943a12b114ac62885a09\"" Sep 9 00:37:51.041152 containerd[1566]: time="2025-09-09T00:37:51.041096598Z" level=info msg="StartContainer for \"6d2015ff1d6e53dcb77905ad2e0e66df624f2915b1a1943a12b114ac62885a09\"" Sep 9 00:37:51.044572 containerd[1566]: time="2025-09-09T00:37:51.044529019Z" level=info msg="connecting to shim 6d2015ff1d6e53dcb77905ad2e0e66df624f2915b1a1943a12b114ac62885a09" address="unix:///run/containerd/s/edc826c165d05739a816c281a79ccfe3ac701bb679dc0650bc7678d83a922dda" protocol=ttrpc version=3 Sep 9 00:37:51.071065 systemd[1]: Started cri-containerd-6d2015ff1d6e53dcb77905ad2e0e66df624f2915b1a1943a12b114ac62885a09.scope - libcontainer container 6d2015ff1d6e53dcb77905ad2e0e66df624f2915b1a1943a12b114ac62885a09. 
Sep 9 00:37:51.122675 containerd[1566]: time="2025-09-09T00:37:51.122555683Z" level=info msg="StartContainer for \"6d2015ff1d6e53dcb77905ad2e0e66df624f2915b1a1943a12b114ac62885a09\" returns successfully" Sep 9 00:37:51.550551 kubelet[2733]: I0909 00:37:51.549318 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-ldv9n" podStartSLOduration=42.481285629 podStartE2EDuration="50.549297073s" podCreationTimestamp="2025-09-09 00:37:01 +0000 UTC" firstStartedPulling="2025-09-09 00:37:42.948416642 +0000 UTC m=+54.779298854" lastFinishedPulling="2025-09-09 00:37:51.016428095 +0000 UTC m=+62.847310298" observedRunningTime="2025-09-09 00:37:51.54875037 +0000 UTC m=+63.379632572" watchObservedRunningTime="2025-09-09 00:37:51.549297073 +0000 UTC m=+63.380179275" Sep 9 00:37:51.570801 containerd[1566]: time="2025-09-09T00:37:51.570736111Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:51.571894 containerd[1566]: time="2025-09-09T00:37:51.571826890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:37:51.580131 containerd[1566]: time="2025-09-09T00:37:51.580065370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 563.495616ms" Sep 9 00:37:51.580131 containerd[1566]: time="2025-09-09T00:37:51.580112932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:37:51.581435 containerd[1566]: 
time="2025-09-09T00:37:51.581396037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:37:51.583800 containerd[1566]: time="2025-09-09T00:37:51.583769931Z" level=info msg="CreateContainer within sandbox \"cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:37:51.593930 containerd[1566]: time="2025-09-09T00:37:51.593443597Z" level=info msg="Container 6498b454906be009797ce8500b4298d5ea95e4e1df2c45c7ca508a9c62d000f0: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:51.604756 containerd[1566]: time="2025-09-09T00:37:51.604698456Z" level=info msg="CreateContainer within sandbox \"cb6a9af8365b4cc5019edd3859fcc6495a614c70e10c9f0a03f2f5cfa2f741b2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6498b454906be009797ce8500b4298d5ea95e4e1df2c45c7ca508a9c62d000f0\"" Sep 9 00:37:51.605329 containerd[1566]: time="2025-09-09T00:37:51.605303289Z" level=info msg="StartContainer for \"6498b454906be009797ce8500b4298d5ea95e4e1df2c45c7ca508a9c62d000f0\"" Sep 9 00:37:51.606401 containerd[1566]: time="2025-09-09T00:37:51.606375872Z" level=info msg="connecting to shim 6498b454906be009797ce8500b4298d5ea95e4e1df2c45c7ca508a9c62d000f0" address="unix:///run/containerd/s/24bfb9d3db4f00e218e5db1f6c37c6717ffff918e7564f0e96a5e1553b62dd7a" protocol=ttrpc version=3 Sep 9 00:37:51.639196 systemd[1]: Started cri-containerd-6498b454906be009797ce8500b4298d5ea95e4e1df2c45c7ca508a9c62d000f0.scope - libcontainer container 6498b454906be009797ce8500b4298d5ea95e4e1df2c45c7ca508a9c62d000f0. 
Sep 9 00:37:51.693681 containerd[1566]: time="2025-09-09T00:37:51.693641112Z" level=info msg="StartContainer for \"6498b454906be009797ce8500b4298d5ea95e4e1df2c45c7ca508a9c62d000f0\" returns successfully" Sep 9 00:37:52.551751 kubelet[2733]: I0909 00:37:52.551670 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7df6bdd7ff-rmz9h" podStartSLOduration=44.240094254 podStartE2EDuration="51.551643247s" podCreationTimestamp="2025-09-09 00:37:01 +0000 UTC" firstStartedPulling="2025-09-09 00:37:44.26965094 +0000 UTC m=+56.100533142" lastFinishedPulling="2025-09-09 00:37:51.581199943 +0000 UTC m=+63.412082135" observedRunningTime="2025-09-09 00:37:52.55084126 +0000 UTC m=+64.381723472" watchObservedRunningTime="2025-09-09 00:37:52.551643247 +0000 UTC m=+64.382525439" Sep 9 00:37:53.459669 systemd[1]: Started sshd@11-10.0.0.5:22-10.0.0.1:34578.service - OpenSSH per-connection server daemon (10.0.0.1:34578). Sep 9 00:37:53.583981 sshd[5323]: Accepted publickey for core from 10.0.0.1 port 34578 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:37:53.586297 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:53.591642 systemd-logind[1550]: New session 12 of user core. Sep 9 00:37:53.600058 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:37:53.910836 sshd[5326]: Connection closed by 10.0.0.1 port 34578 Sep 9 00:37:53.911193 sshd-session[5323]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:53.923931 systemd[1]: sshd@11-10.0.0.5:22-10.0.0.1:34578.service: Deactivated successfully. Sep 9 00:37:53.926195 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:37:53.926994 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:37:53.930175 systemd[1]: Started sshd@12-10.0.0.5:22-10.0.0.1:34588.service - OpenSSH per-connection server daemon (10.0.0.1:34588). 
Sep 9 00:37:53.930775 systemd-logind[1550]: Removed session 12. Sep 9 00:37:53.989573 sshd[5340]: Accepted publickey for core from 10.0.0.1 port 34588 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:37:53.991475 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:53.996164 systemd-logind[1550]: New session 13 of user core. Sep 9 00:37:54.002008 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:37:54.295701 sshd[5342]: Connection closed by 10.0.0.1 port 34588 Sep 9 00:37:54.298281 sshd-session[5340]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:54.314741 systemd[1]: sshd@12-10.0.0.5:22-10.0.0.1:34588.service: Deactivated successfully. Sep 9 00:37:54.321863 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:37:54.326957 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:37:54.332603 systemd[1]: Started sshd@13-10.0.0.5:22-10.0.0.1:34592.service - OpenSSH per-connection server daemon (10.0.0.1:34592). Sep 9 00:37:54.335080 systemd-logind[1550]: Removed session 13. Sep 9 00:37:54.416120 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 34592 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:37:54.418090 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:54.425531 systemd-logind[1550]: New session 14 of user core. Sep 9 00:37:54.431062 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:37:54.597375 sshd[5359]: Connection closed by 10.0.0.1 port 34592 Sep 9 00:37:54.596541 sshd-session[5357]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:54.603132 systemd[1]: sshd@13-10.0.0.5:22-10.0.0.1:34592.service: Deactivated successfully. Sep 9 00:37:54.607633 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:37:54.608782 systemd-logind[1550]: Session 14 logged out. 
Waiting for processes to exit. Sep 9 00:37:54.611039 systemd-logind[1550]: Removed session 14. Sep 9 00:37:56.265536 kubelet[2733]: E0909 00:37:56.265488 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:56.280142 containerd[1566]: time="2025-09-09T00:37:56.280083035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079\" id:\"4a16ae57253af95028326f45dc60602232bf9687ea89c2880a87e1ab94444826\" pid:5396 exited_at:{seconds:1757378276 nanos:279571432}" Sep 9 00:37:56.664722 containerd[1566]: time="2025-09-09T00:37:56.664642979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:56.685325 containerd[1566]: time="2025-09-09T00:37:56.685205397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:37:56.727738 containerd[1566]: time="2025-09-09T00:37:56.727662357Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:56.771437 containerd[1566]: time="2025-09-09T00:37:56.771358494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:37:56.772010 containerd[1566]: time="2025-09-09T00:37:56.771961200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 5.190525639s" Sep 9 00:37:56.772010 containerd[1566]: time="2025-09-09T00:37:56.772011516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 00:37:56.773034 containerd[1566]: time="2025-09-09T00:37:56.772998313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:37:56.779590 containerd[1566]: time="2025-09-09T00:37:56.779550317Z" level=info msg="CreateContainer within sandbox \"bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:37:56.808732 containerd[1566]: time="2025-09-09T00:37:56.808671067Z" level=info msg="Container 78b520f1fd4597f186e8780854301de7cb8e267d6b6d821cfb968dc173024009: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:37:56.840534 containerd[1566]: time="2025-09-09T00:37:56.840461675Z" level=info msg="CreateContainer within sandbox \"bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"78b520f1fd4597f186e8780854301de7cb8e267d6b6d821cfb968dc173024009\"" Sep 9 00:37:56.841178 containerd[1566]: time="2025-09-09T00:37:56.841126259Z" level=info msg="StartContainer for \"78b520f1fd4597f186e8780854301de7cb8e267d6b6d821cfb968dc173024009\"" Sep 9 00:37:56.843366 containerd[1566]: time="2025-09-09T00:37:56.843314783Z" level=info msg="connecting to shim 78b520f1fd4597f186e8780854301de7cb8e267d6b6d821cfb968dc173024009" address="unix:///run/containerd/s/ac56bf67ab4571ee39b54c6f58130c3add54bad7091d0e337b2c5d142b0e4b66" protocol=ttrpc version=3 Sep 9 00:37:56.876272 systemd[1]: Started cri-containerd-78b520f1fd4597f186e8780854301de7cb8e267d6b6d821cfb968dc173024009.scope - libcontainer container 
78b520f1fd4597f186e8780854301de7cb8e267d6b6d821cfb968dc173024009. Sep 9 00:37:56.936105 containerd[1566]: time="2025-09-09T00:37:56.935946525Z" level=info msg="StartContainer for \"78b520f1fd4597f186e8780854301de7cb8e267d6b6d821cfb968dc173024009\" returns successfully" Sep 9 00:37:57.581120 containerd[1566]: time="2025-09-09T00:37:57.581072740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3\" id:\"2277481012900d82b77a692197a9aa9d147e2041ead80370f3e0e3bc5916901a\" pid:5451 exited_at:{seconds:1757378277 nanos:580263541}" Sep 9 00:37:59.615751 systemd[1]: Started sshd@14-10.0.0.5:22-10.0.0.1:34596.service - OpenSSH per-connection server daemon (10.0.0.1:34596). Sep 9 00:37:59.700890 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 34596 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:37:59.706004 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:37:59.714988 systemd-logind[1550]: New session 15 of user core. Sep 9 00:37:59.724139 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:37:59.903411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3712814806.mount: Deactivated successfully. Sep 9 00:38:00.006564 sshd[5476]: Connection closed by 10.0.0.1 port 34596 Sep 9 00:38:00.007667 sshd-session[5474]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:00.013742 systemd[1]: sshd@14-10.0.0.5:22-10.0.0.1:34596.service: Deactivated successfully. Sep 9 00:38:00.016684 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:38:00.017945 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:38:00.020260 systemd-logind[1550]: Removed session 15. 
Sep 9 00:38:00.274327 containerd[1566]: time="2025-09-09T00:38:00.274268921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:00.275097 containerd[1566]: time="2025-09-09T00:38:00.275050265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:38:00.276354 containerd[1566]: time="2025-09-09T00:38:00.276314337Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:00.278776 containerd[1566]: time="2025-09-09T00:38:00.278747479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:00.279575 containerd[1566]: time="2025-09-09T00:38:00.279530859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.506502638s" Sep 9 00:38:00.279575 containerd[1566]: time="2025-09-09T00:38:00.279568129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:38:00.280914 containerd[1566]: time="2025-09-09T00:38:00.280804448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:38:00.282455 containerd[1566]: time="2025-09-09T00:38:00.282428524Z" level=info msg="CreateContainer within sandbox 
\"a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:38:00.296555 containerd[1566]: time="2025-09-09T00:38:00.296508015Z" level=info msg="Container 3bdb9dfa59ca2bd17ee477da99938f8eed8b8b23d54f2d7a0ac1a6e6dfbb5629: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:38:00.305596 containerd[1566]: time="2025-09-09T00:38:00.305557710Z" level=info msg="CreateContainer within sandbox \"a03425670756097b6117deb3db89280678017e634aaef9b1a3593f1e9b03be94\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3bdb9dfa59ca2bd17ee477da99938f8eed8b8b23d54f2d7a0ac1a6e6dfbb5629\"" Sep 9 00:38:00.306358 containerd[1566]: time="2025-09-09T00:38:00.306317574Z" level=info msg="StartContainer for \"3bdb9dfa59ca2bd17ee477da99938f8eed8b8b23d54f2d7a0ac1a6e6dfbb5629\"" Sep 9 00:38:00.307553 containerd[1566]: time="2025-09-09T00:38:00.307523596Z" level=info msg="connecting to shim 3bdb9dfa59ca2bd17ee477da99938f8eed8b8b23d54f2d7a0ac1a6e6dfbb5629" address="unix:///run/containerd/s/304e5cbd85d48ffcd0197f22b27ab5898ac30a231988d47f1984773624862413" protocol=ttrpc version=3 Sep 9 00:38:00.334145 systemd[1]: Started cri-containerd-3bdb9dfa59ca2bd17ee477da99938f8eed8b8b23d54f2d7a0ac1a6e6dfbb5629.scope - libcontainer container 3bdb9dfa59ca2bd17ee477da99938f8eed8b8b23d54f2d7a0ac1a6e6dfbb5629. 
Sep 9 00:38:00.390255 containerd[1566]: time="2025-09-09T00:38:00.390134148Z" level=info msg="StartContainer for \"3bdb9dfa59ca2bd17ee477da99938f8eed8b8b23d54f2d7a0ac1a6e6dfbb5629\" returns successfully" Sep 9 00:38:00.823588 kubelet[2733]: I0909 00:38:00.823412 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-645556b648-5gjs6" podStartSLOduration=3.2372273910000002 podStartE2EDuration="27.823389989s" podCreationTimestamp="2025-09-09 00:37:33 +0000 UTC" firstStartedPulling="2025-09-09 00:37:35.694524547 +0000 UTC m=+47.525406749" lastFinishedPulling="2025-09-09 00:38:00.280687145 +0000 UTC m=+72.111569347" observedRunningTime="2025-09-09 00:38:00.823249503 +0000 UTC m=+72.654131695" watchObservedRunningTime="2025-09-09 00:38:00.823389989 +0000 UTC m=+72.654272191" Sep 9 00:38:04.456466 containerd[1566]: time="2025-09-09T00:38:04.456366042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:04.457162 containerd[1566]: time="2025-09-09T00:38:04.457128358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 00:38:04.458324 containerd[1566]: time="2025-09-09T00:38:04.458289503Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:04.460853 containerd[1566]: time="2025-09-09T00:38:04.460798414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:38:04.461604 containerd[1566]: time="2025-09-09T00:38:04.461513732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id 
\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 4.180671472s" Sep 9 00:38:04.461604 containerd[1566]: time="2025-09-09T00:38:04.461569157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 00:38:04.463807 containerd[1566]: time="2025-09-09T00:38:04.463778570Z" level=info msg="CreateContainer within sandbox \"bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:38:04.475192 containerd[1566]: time="2025-09-09T00:38:04.475128141Z" level=info msg="Container d51e4bb672109437359594df07f56b2d0d6b0aaf50bc9f72fdd556d30c12677c: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:38:04.485826 containerd[1566]: time="2025-09-09T00:38:04.485767525Z" level=info msg="CreateContainer within sandbox \"bba9f350476f6ca9b5bc5da61939d9a0f8364b358e79e2f2bb305d2952af9168\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d51e4bb672109437359594df07f56b2d0d6b0aaf50bc9f72fdd556d30c12677c\"" Sep 9 00:38:04.486380 containerd[1566]: time="2025-09-09T00:38:04.486343428Z" level=info msg="StartContainer for \"d51e4bb672109437359594df07f56b2d0d6b0aaf50bc9f72fdd556d30c12677c\"" Sep 9 00:38:04.487738 containerd[1566]: time="2025-09-09T00:38:04.487707096Z" level=info msg="connecting to shim d51e4bb672109437359594df07f56b2d0d6b0aaf50bc9f72fdd556d30c12677c" address="unix:///run/containerd/s/ac56bf67ab4571ee39b54c6f58130c3add54bad7091d0e337b2c5d142b0e4b66" protocol=ttrpc version=3 Sep 9 00:38:04.514028 systemd[1]: Started 
cri-containerd-d51e4bb672109437359594df07f56b2d0d6b0aaf50bc9f72fdd556d30c12677c.scope - libcontainer container d51e4bb672109437359594df07f56b2d0d6b0aaf50bc9f72fdd556d30c12677c. Sep 9 00:38:04.570897 containerd[1566]: time="2025-09-09T00:38:04.570778692Z" level=info msg="StartContainer for \"d51e4bb672109437359594df07f56b2d0d6b0aaf50bc9f72fdd556d30c12677c\" returns successfully" Sep 9 00:38:04.602959 containerd[1566]: time="2025-09-09T00:38:04.602910176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079\" id:\"b6063da35b8af43d14ab4cdc3c22e31da4f3ab2c919c071e41ac515d98ba1356\" pid:5576 exited_at:{seconds:1757378284 nanos:602559672}" Sep 9 00:38:05.031136 systemd[1]: Started sshd@15-10.0.0.5:22-10.0.0.1:55930.service - OpenSSH per-connection server daemon (10.0.0.1:55930). Sep 9 00:38:05.107744 sshd[5590]: Accepted publickey for core from 10.0.0.1 port 55930 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:38:05.109284 sshd-session[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:38:05.113672 systemd-logind[1550]: New session 16 of user core. Sep 9 00:38:05.124010 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:38:05.331088 sshd[5592]: Connection closed by 10.0.0.1 port 55930 Sep 9 00:38:05.331570 sshd-session[5590]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:05.335685 systemd[1]: sshd@15-10.0.0.5:22-10.0.0.1:55930.service: Deactivated successfully. Sep 9 00:38:05.337867 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:38:05.338704 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:38:05.340063 systemd-logind[1550]: Removed session 16. 
Sep 9 00:38:05.415474 kubelet[2733]: I0909 00:38:05.415434 2733 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:38:05.415474 kubelet[2733]: I0909 00:38:05.415469 2733 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:38:05.592574 kubelet[2733]: I0909 00:38:05.592116 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bdgfl" podStartSLOduration=39.733017103 podStartE2EDuration="57.592093805s" podCreationTimestamp="2025-09-09 00:37:08 +0000 UTC" firstStartedPulling="2025-09-09 00:37:46.603363879 +0000 UTC m=+58.434246081" lastFinishedPulling="2025-09-09 00:38:04.462440581 +0000 UTC m=+76.293322783" observedRunningTime="2025-09-09 00:38:05.590960043 +0000 UTC m=+77.421842245" watchObservedRunningTime="2025-09-09 00:38:05.592093805 +0000 UTC m=+77.422976007" Sep 9 00:38:06.265830 kubelet[2733]: E0909 00:38:06.265773 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:07.265339 kubelet[2733]: E0909 00:38:07.265279 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:10.347922 systemd[1]: Started sshd@16-10.0.0.5:22-10.0.0.1:41838.service - OpenSSH per-connection server daemon (10.0.0.1:41838). 
Sep 9 00:38:10.559472 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 41838 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:38:10.560914 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:38:10.565660 systemd-logind[1550]: New session 17 of user core. Sep 9 00:38:10.582043 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:38:10.765801 sshd[5611]: Connection closed by 10.0.0.1 port 41838 Sep 9 00:38:10.766064 sshd-session[5609]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:10.770657 systemd[1]: sshd@16-10.0.0.5:22-10.0.0.1:41838.service: Deactivated successfully. Sep 9 00:38:10.773002 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:38:10.773804 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:38:10.775759 systemd-logind[1550]: Removed session 17. Sep 9 00:38:13.319262 containerd[1566]: time="2025-09-09T00:38:13.319205351Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804\" id:\"53deb02a181f5caac720b3470348b45baff7966a7da8a59bda9140692be0a8a0\" pid:5636 exited_at:{seconds:1757378293 nanos:318660689}" Sep 9 00:38:15.781866 systemd[1]: Started sshd@17-10.0.0.5:22-10.0.0.1:41848.service - OpenSSH per-connection server daemon (10.0.0.1:41848). Sep 9 00:38:15.838146 sshd[5651]: Accepted publickey for core from 10.0.0.1 port 41848 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:38:15.839610 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:38:15.844316 systemd-logind[1550]: New session 18 of user core. Sep 9 00:38:15.853044 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 9 00:38:16.091753 sshd[5653]: Connection closed by 10.0.0.1 port 41848 Sep 9 00:38:16.091996 sshd-session[5651]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:16.096087 systemd[1]: sshd@17-10.0.0.5:22-10.0.0.1:41848.service: Deactivated successfully. Sep 9 00:38:16.098408 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:38:16.099315 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:38:16.100570 systemd-logind[1550]: Removed session 18. Sep 9 00:38:17.265544 kubelet[2733]: E0909 00:38:17.265476 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:21.109139 systemd[1]: Started sshd@18-10.0.0.5:22-10.0.0.1:54822.service - OpenSSH per-connection server daemon (10.0.0.1:54822). Sep 9 00:38:21.183521 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 54822 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:38:21.185373 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:38:21.193021 systemd-logind[1550]: New session 19 of user core. Sep 9 00:38:21.196128 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:38:21.431523 sshd[5676]: Connection closed by 10.0.0.1 port 54822 Sep 9 00:38:21.431817 sshd-session[5674]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:21.442252 systemd[1]: sshd@18-10.0.0.5:22-10.0.0.1:54822.service: Deactivated successfully. Sep 9 00:38:21.444376 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:38:21.445429 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:38:21.449937 systemd[1]: Started sshd@19-10.0.0.5:22-10.0.0.1:54830.service - OpenSSH per-connection server daemon (10.0.0.1:54830). Sep 9 00:38:21.450748 systemd-logind[1550]: Removed session 19. 
Sep 9 00:38:21.519496 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 54830 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:38:21.522239 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:38:21.530913 systemd-logind[1550]: New session 20 of user core. Sep 9 00:38:21.546067 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:38:21.756632 sshd[5693]: Connection closed by 10.0.0.1 port 54830 Sep 9 00:38:21.758132 sshd-session[5689]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:21.766752 systemd[1]: sshd@19-10.0.0.5:22-10.0.0.1:54830.service: Deactivated successfully. Sep 9 00:38:21.768778 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:38:21.769789 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:38:21.773400 systemd[1]: Started sshd@20-10.0.0.5:22-10.0.0.1:54838.service - OpenSSH per-connection server daemon (10.0.0.1:54838). Sep 9 00:38:21.774540 systemd-logind[1550]: Removed session 20. Sep 9 00:38:21.828301 sshd[5704]: Accepted publickey for core from 10.0.0.1 port 54838 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:38:21.829827 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:38:21.835461 systemd-logind[1550]: New session 21 of user core. Sep 9 00:38:21.840041 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:38:23.847287 sshd[5706]: Connection closed by 10.0.0.1 port 54838 Sep 9 00:38:23.850040 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:23.861243 systemd[1]: sshd@20-10.0.0.5:22-10.0.0.1:54838.service: Deactivated successfully. Sep 9 00:38:23.864193 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:38:23.864533 systemd[1]: session-21.scope: Consumed 679ms CPU time, 72.3M memory peak. 
Sep 9 00:38:23.866604 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:38:23.872710 systemd[1]: Started sshd@21-10.0.0.5:22-10.0.0.1:54854.service - OpenSSH per-connection server daemon (10.0.0.1:54854). Sep 9 00:38:23.873868 systemd-logind[1550]: Removed session 21. Sep 9 00:38:23.939719 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 54854 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:38:23.941812 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:38:23.948663 systemd-logind[1550]: New session 22 of user core. Sep 9 00:38:23.958080 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:38:24.462297 sshd[5729]: Connection closed by 10.0.0.1 port 54854 Sep 9 00:38:24.464119 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:24.477108 systemd[1]: sshd@21-10.0.0.5:22-10.0.0.1:54854.service: Deactivated successfully. Sep 9 00:38:24.480681 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:38:24.482300 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:38:24.484886 systemd-logind[1550]: Removed session 22. Sep 9 00:38:24.489984 systemd[1]: Started sshd@22-10.0.0.5:22-10.0.0.1:54862.service - OpenSSH per-connection server daemon (10.0.0.1:54862). Sep 9 00:38:24.544784 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 54862 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs Sep 9 00:38:24.547443 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:38:24.559933 systemd-logind[1550]: New session 23 of user core. Sep 9 00:38:24.563101 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 9 00:38:24.749008 sshd[5745]: Connection closed by 10.0.0.1 port 54862
Sep 9 00:38:24.749634 sshd-session[5743]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:24.759576 systemd[1]: sshd@22-10.0.0.5:22-10.0.0.1:54862.service: Deactivated successfully.
Sep 9 00:38:24.767097 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 00:38:24.769289 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit.
Sep 9 00:38:24.770973 systemd-logind[1550]: Removed session 23.
Sep 9 00:38:26.251303 containerd[1566]: time="2025-09-09T00:38:26.251242457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63c0d5c93a447ffdb50a2b31b0e3ccedaece686f02c90c418d8ea7dfdfab4079\" id:\"a5e45d520fa91386af4b9c0ca1864b132db4ff4be8ad31ac7c6705b6e9ec5fe9\" pid:5770 exited_at:{seconds:1757378306 nanos:250729166}"
Sep 9 00:38:27.584323 containerd[1566]: time="2025-09-09T00:38:27.584247114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a581755cfc917a1c026b2467e5b670fefda39565f145b0a6f3643732e34737a3\" id:\"8afc3d014bb96d8d1fbbb3b1f314b4241fd18e562bf01871f460250daa11e4c0\" pid:5794 exited_at:{seconds:1757378307 nanos:583778268}"
Sep 9 00:38:29.764935 systemd[1]: Started sshd@23-10.0.0.5:22-10.0.0.1:54878.service - OpenSSH per-connection server daemon (10.0.0.1:54878).
Sep 9 00:38:29.813812 sshd[5808]: Accepted publickey for core from 10.0.0.1 port 54878 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:38:29.815663 sshd-session[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:38:29.820815 systemd-logind[1550]: New session 24 of user core.
Sep 9 00:38:29.831036 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 00:38:29.954033 sshd[5812]: Connection closed by 10.0.0.1 port 54878
Sep 9 00:38:29.954410 sshd-session[5808]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:29.959339 systemd[1]: sshd@23-10.0.0.5:22-10.0.0.1:54878.service: Deactivated successfully.
Sep 9 00:38:29.961552 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 00:38:29.962497 systemd-logind[1550]: Session 24 logged out. Waiting for processes to exit.
Sep 9 00:38:29.963764 systemd-logind[1550]: Removed session 24.
Sep 9 00:38:34.967311 systemd[1]: Started sshd@24-10.0.0.5:22-10.0.0.1:53828.service - OpenSSH per-connection server daemon (10.0.0.1:53828).
Sep 9 00:38:35.024710 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 53828 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:38:35.026599 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:38:35.031310 systemd-logind[1550]: New session 25 of user core.
Sep 9 00:38:35.036150 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 00:38:35.160016 sshd[5827]: Connection closed by 10.0.0.1 port 53828
Sep 9 00:38:35.160399 sshd-session[5825]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:35.165327 systemd[1]: sshd@24-10.0.0.5:22-10.0.0.1:53828.service: Deactivated successfully.
Sep 9 00:38:35.167626 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 00:38:35.168635 systemd-logind[1550]: Session 25 logged out. Waiting for processes to exit.
Sep 9 00:38:35.170202 systemd-logind[1550]: Removed session 25.
Sep 9 00:38:35.406507 containerd[1566]: time="2025-09-09T00:38:35.406453683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804\" id:\"766d03eef6d06aa1c39c7601a44f4b1ec841e5803df0445b5e701198aa627110\" pid:5852 exited_at:{seconds:1757378315 nanos:406081220}"
Sep 9 00:38:40.171960 systemd[1]: Started sshd@25-10.0.0.5:22-10.0.0.1:51906.service - OpenSSH per-connection server daemon (10.0.0.1:51906).
Sep 9 00:38:40.235581 sshd[5864]: Accepted publickey for core from 10.0.0.1 port 51906 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:38:40.237447 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:38:40.242899 systemd-logind[1550]: New session 26 of user core.
Sep 9 00:38:40.251006 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 00:38:40.265964 kubelet[2733]: E0909 00:38:40.265927 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:40.437653 sshd[5866]: Connection closed by 10.0.0.1 port 51906
Sep 9 00:38:40.440075 sshd-session[5864]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:40.445706 systemd-logind[1550]: Session 26 logged out. Waiting for processes to exit.
Sep 9 00:38:40.446740 systemd[1]: sshd@25-10.0.0.5:22-10.0.0.1:51906.service: Deactivated successfully.
Sep 9 00:38:40.449917 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 00:38:40.454208 systemd-logind[1550]: Removed session 26.
Sep 9 00:38:43.317451 containerd[1566]: time="2025-09-09T00:38:43.317321194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27ffa10090c0640cf8917238b91e3029b70483219975e3530fc4d4ee8a2f6804\" id:\"69e9d434506f78cb85bc3edb76c0b96207d3e6fd57e4662784cd2c92e6a12160\" pid:5890 exited_at:{seconds:1757378323 nanos:316951897}"
Sep 9 00:38:45.454492 systemd[1]: Started sshd@26-10.0.0.5:22-10.0.0.1:51910.service - OpenSSH per-connection server daemon (10.0.0.1:51910).
Sep 9 00:38:45.521668 sshd[5903]: Accepted publickey for core from 10.0.0.1 port 51910 ssh2: RSA SHA256:9cKk/hvE0sTUMcUFe7hzJ+6fDgKCSIPiMvH38LMZpLs
Sep 9 00:38:45.523761 sshd-session[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:38:45.529553 systemd-logind[1550]: New session 27 of user core.
Sep 9 00:38:45.537204 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 00:38:45.662582 sshd[5906]: Connection closed by 10.0.0.1 port 51910
Sep 9 00:38:45.663006 sshd-session[5903]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:45.668091 systemd[1]: sshd@26-10.0.0.5:22-10.0.0.1:51910.service: Deactivated successfully.
Sep 9 00:38:45.670407 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 00:38:45.671255 systemd-logind[1550]: Session 27 logged out. Waiting for processes to exit.
Sep 9 00:38:45.672807 systemd-logind[1550]: Removed session 27.