Sep 13 00:15:58.947552 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:15:39 -00 2025
Sep 13 00:15:58.947599 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 00:15:58.947615 kernel: BIOS-provided physical RAM map:
Sep 13 00:15:58.947624 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:15:58.947632 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 00:15:58.947640 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 00:15:58.947649 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 00:15:58.947658 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 00:15:58.947672 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 13 00:15:58.947680 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 13 00:15:58.947689 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 13 00:15:58.947697 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 13 00:15:58.947705 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 13 00:15:58.947714 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 13 00:15:58.947727 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 13 00:15:58.947746 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 00:15:58.947755 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 13 00:15:58.947764 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 13 00:15:58.947772 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 13 00:15:58.947781 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 13 00:15:58.947790 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 13 00:15:58.947799 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 00:15:58.947807 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 00:15:58.947816 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:15:58.947825 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 13 00:15:58.947836 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 00:15:58.947845 kernel: NX (Execute Disable) protection: active
Sep 13 00:15:58.947854 kernel: APIC: Static calls initialized
Sep 13 00:15:58.947863 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 13 00:15:58.947872 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 13 00:15:58.947881 kernel: extended physical RAM map:
Sep 13 00:15:58.947889 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:15:58.947898 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 00:15:58.947907 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 00:15:58.947933 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 00:15:58.947942 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 00:15:58.947953 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 13 00:15:58.947962 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 13 00:15:58.947971 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 13 00:15:58.947980 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 13 00:15:58.947993 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 13 00:15:58.948002 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 13 00:15:58.948014 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 13 00:15:58.948023 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 13 00:15:58.948033 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 13 00:15:58.948042 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 13 00:15:58.948051 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 13 00:15:58.948060 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 00:15:58.948070 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 13 00:15:58.948079 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 13 00:15:58.948088 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 13 00:15:58.948100 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 13 00:15:58.948109 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 13 00:15:58.948118 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 00:15:58.948127 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 00:15:58.948136 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:15:58.948146 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 13 00:15:58.948155 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 00:15:58.948167 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:15:58.948176 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 13 00:15:58.948185 kernel: random: crng init done
Sep 13 00:15:58.948195 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 13 00:15:58.948204 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 13 00:15:58.948215 kernel: secureboot: Secure boot disabled
Sep 13 00:15:58.948225 kernel: SMBIOS 2.8 present.
Sep 13 00:15:58.948234 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 13 00:15:58.948243 kernel: DMI: Memory slots populated: 1/1
Sep 13 00:15:58.948252 kernel: Hypervisor detected: KVM
Sep 13 00:15:58.948261 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:15:58.948271 kernel: kvm-clock: using sched offset of 9607026896 cycles
Sep 13 00:15:58.948281 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:15:58.948291 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 00:15:58.948300 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:15:58.948312 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:15:58.948322 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 13 00:15:58.948331 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:15:58.948341 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:15:58.948351 kernel: Using GB pages for direct mapping
Sep 13 00:15:58.948360 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:15:58.948370 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 13 00:15:58.948380 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:15:58.948389 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:58.948401 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:58.948411 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 13 00:15:58.948420 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:58.948430 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:58.948439 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:58.948449 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:58.948459 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:15:58.948468 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 13 00:15:58.948478 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 13 00:15:58.948489 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 13 00:15:58.948499 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 13 00:15:58.948509 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 13 00:15:58.948518 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 13 00:15:58.948527 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 13 00:15:58.948537 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 13 00:15:58.948546 kernel: No NUMA configuration found
Sep 13 00:15:58.948556 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 13 00:15:58.948565 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 13 00:15:58.948577 kernel: Zone ranges:
Sep 13 00:15:58.948587 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:15:58.948602 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 13 00:15:58.948611 kernel: Normal empty
Sep 13 00:15:58.948620 kernel: Device empty
Sep 13 00:15:58.948630 kernel: Movable zone start for each node
Sep 13 00:15:58.948639 kernel: Early memory node ranges
Sep 13 00:15:58.948649 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:15:58.948658 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 13 00:15:58.948667 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 13 00:15:58.948679 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 13 00:15:58.948689 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 13 00:15:58.948698 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 13 00:15:58.948708 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 13 00:15:58.948717 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 13 00:15:58.948726 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 13 00:15:58.948750 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:15:58.948760 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:15:58.948789 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 13 00:15:58.948803 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:15:58.948816 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 13 00:15:58.948837 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 13 00:15:58.948858 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 13 00:15:58.948868 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 13 00:15:58.948878 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 13 00:15:58.948888 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:15:58.948898 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:15:58.948923 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:15:58.948946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:15:58.948956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:15:58.948966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:15:58.948976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:15:58.948986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:15:58.948996 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:15:58.949006 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:15:58.949019 kernel: TSC deadline timer available
Sep 13 00:15:58.949029 kernel: CPU topo: Max. logical packages: 1
Sep 13 00:15:58.949039 kernel: CPU topo: Max. logical dies: 1
Sep 13 00:15:58.949048 kernel: CPU topo: Max. dies per package: 1
Sep 13 00:15:58.949058 kernel: CPU topo: Max. threads per core: 1
Sep 13 00:15:58.949068 kernel: CPU topo: Num. cores per package: 4
Sep 13 00:15:58.949078 kernel: CPU topo: Num. threads per package: 4
Sep 13 00:15:58.949088 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 13 00:15:58.949098 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:15:58.949108 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:15:58.949120 kernel: kvm-guest: setup PV sched yield
Sep 13 00:15:58.949130 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 13 00:15:58.949140 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:15:58.949151 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:15:58.949161 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:15:58.949171 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 13 00:15:58.949194 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 13 00:15:58.949214 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:15:58.949224 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:15:58.949237 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:15:58.949249 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 00:15:58.949259 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:15:58.949269 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:15:58.949279 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:15:58.949289 kernel: Fallback order for Node 0: 0
Sep 13 00:15:58.949299 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 13 00:15:58.949309 kernel: Policy zone: DMA32
Sep 13 00:15:58.949322 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:15:58.949332 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:15:58.949342 kernel: ftrace: allocating 40122 entries in 157 pages
Sep 13 00:15:58.949352 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 00:15:58.949362 kernel: Dynamic Preempt: voluntary
Sep 13 00:15:58.949371 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:15:58.949382 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:15:58.949393 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:15:58.949403 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:15:58.949415 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:15:58.949425 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:15:58.949435 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:15:58.949448 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:15:58.949458 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:15:58.949468 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:15:58.949478 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:15:58.949488 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:15:58.949498 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:15:58.949511 kernel: Console: colour dummy device 80x25
Sep 13 00:15:58.949521 kernel: printk: legacy console [ttyS0] enabled
Sep 13 00:15:58.949531 kernel: ACPI: Core revision 20240827
Sep 13 00:15:58.949541 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:15:58.949551 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:15:58.949561 kernel: x2apic enabled
Sep 13 00:15:58.949571 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:15:58.949580 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 00:15:58.949590 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 00:15:58.949603 kernel: kvm-guest: setup PV IPIs
Sep 13 00:15:58.949613 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:15:58.949623 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 00:15:58.949633 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 00:15:58.949643 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:15:58.949653 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:15:58.949663 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:15:58.949673 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:15:58.949683 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:15:58.949696 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:15:58.949705 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:15:58.949715 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:15:58.949725 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:15:58.949743 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:15:58.949754 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:15:58.949764 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 00:15:58.949774 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 00:15:58.949787 kernel: active return thunk: srso_return_thunk
Sep 13 00:15:58.949797 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 00:15:58.949807 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:15:58.949817 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:15:58.949827 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:15:58.949836 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:15:58.949847 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:15:58.949857 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:15:58.949866 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:15:58.949879 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 00:15:58.949889 kernel: landlock: Up and running.
Sep 13 00:15:58.949898 kernel: SELinux: Initializing.
Sep 13 00:15:58.949908 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:15:58.949934 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:15:58.949944 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:15:58.949953 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:15:58.949963 kernel: ... version: 0
Sep 13 00:15:58.949973 kernel: ... bit width: 48
Sep 13 00:15:58.949986 kernel: ... generic registers: 6
Sep 13 00:15:58.949996 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:15:58.950006 kernel: ... max period: 00007fffffffffff
Sep 13 00:15:58.950015 kernel: ... fixed-purpose events: 0
Sep 13 00:15:58.950025 kernel: ... event mask: 000000000000003f
Sep 13 00:15:58.950035 kernel: signal: max sigframe size: 1776
Sep 13 00:15:58.950045 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:15:58.950058 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:15:58.950093 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 13 00:15:58.950106 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:15:58.950116 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:15:58.950126 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 00:15:58.950136 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:15:58.950146 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 00:15:58.950156 kernel: Memory: 2424720K/2565800K available (14336K kernel code, 2432K rwdata, 9960K rodata, 53828K init, 1088K bss, 135152K reserved, 0K cma-reserved)
Sep 13 00:15:58.950166 kernel: devtmpfs: initialized
Sep 13 00:15:58.950176 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:15:58.950186 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 13 00:15:58.950198 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 13 00:15:58.950209 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 13 00:15:58.950219 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 13 00:15:58.950229 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 13 00:15:58.950239 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 13 00:15:58.950249 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:15:58.950259 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:15:58.950269 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:15:58.950278 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:15:58.950291 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:15:58.950301 kernel: audit: type=2000 audit(1757722552.711:1): state=initialized audit_enabled=0 res=1
Sep 13 00:15:58.950311 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:15:58.950321 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:15:58.950331 kernel: cpuidle: using governor menu
Sep 13 00:15:58.950340 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:15:58.950350 kernel: dca service started, version 1.12.1
Sep 13 00:15:58.950360 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 13 00:15:58.950370 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:15:58.950383 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:15:58.950392 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:15:58.950402 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:15:58.950412 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:15:58.950422 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:15:58.950432 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:15:58.950442 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:15:58.950452 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:15:58.950461 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:15:58.950474 kernel: ACPI: Interpreter enabled
Sep 13 00:15:58.950484 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:15:58.950494 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:15:58.950504 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:15:58.950513 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:15:58.950523 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:15:58.950533 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:15:58.950823 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:15:58.951002 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:15:58.951143 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:15:58.951156 kernel: PCI host bridge to bus 0000:00
Sep 13 00:15:58.951322 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:15:58.951452 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:15:58.951592 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:15:58.951721 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 13 00:15:58.951868 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 13 00:15:58.952023 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 13 00:15:58.952154 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:15:58.952328 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 13 00:15:58.952495 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 13 00:15:58.952636 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 13 00:15:58.952795 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 13 00:15:58.952964 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 13 00:15:58.953106 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:15:58.953271 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 13 00:15:58.953430 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 13 00:15:58.953962 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 13 00:15:58.954149 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 13 00:15:58.954346 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 13 00:15:58.954490 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 13 00:15:58.954631 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 13 00:15:58.954783 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 13 00:15:58.954976 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 13 00:15:58.955191 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 13 00:15:58.955348 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 13 00:15:58.955494 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 13 00:15:58.955633 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 13 00:15:58.955808 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 13 00:15:58.955971 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:15:58.956138 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 13 00:15:58.956279 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 13 00:15:58.956424 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 13 00:15:58.956581 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 13 00:15:58.956722 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 13 00:15:58.956746 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:15:58.956756 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:15:58.956766 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:15:58.956776 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:15:58.956786 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:15:58.956800 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:15:58.956810 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:15:58.956820 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:15:58.956830 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:15:58.956840 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:15:58.956850 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:15:58.956859 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:15:58.956869 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:15:58.956879 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:15:58.956892 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:15:58.956901 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:15:58.956940 kernel: iommu: Default domain type: Translated
Sep 13 00:15:58.956950 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:15:58.956960 kernel: efivars: Registered efivars operations
Sep 13 00:15:58.956970 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:15:58.956979 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:15:58.956989 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 13 00:15:58.956999 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 13 00:15:58.957011 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 13 00:15:58.957021 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 13 00:15:58.957031 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 13 00:15:58.957040 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 13 00:15:58.957050 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 13 00:15:58.957060 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 13 00:15:58.957205 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:15:58.957345 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:15:58.957487 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:15:58.957499 kernel: vgaarb: loaded
Sep 13 00:15:58.957509 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:15:58.957519 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:15:58.957529 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:15:58.957539 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:15:58.957549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:15:58.957559 kernel: pnp: PnP ACPI init
Sep 13 00:15:58.957766 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 13 00:15:58.957788 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:15:58.957799 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:15:58.957809 kernel: NET: Registered PF_INET protocol family
Sep 13 00:15:58.957819 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:15:58.957830 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:15:58.957841 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:15:58.957851 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:15:58.957861 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:15:58.957874 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:15:58.957885 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:15:58.957895 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:15:58.957906 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:15:58.957936 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:15:58.958113 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 13 00:15:58.958257 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 13 00:15:58.958385 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:15:58.958517 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:15:58.958644 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:15:58.958782 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 13 00:15:58.958926 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 13 00:15:58.959056 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 13 00:15:58.959069 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:15:58.959080 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 00:15:58.959091 kernel: Initialise system trusted keyrings
Sep 13 00:15:58.959105 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:15:58.959116 kernel: Key type asymmetric registered
Sep 13 00:15:58.959126 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:15:58.959136 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 00:15:58.959146 kernel: io scheduler mq-deadline registered
Sep 13 00:15:58.959157 kernel: io scheduler kyber registered
Sep 13 00:15:58.959170 kernel: io scheduler bfq registered
Sep 13 00:15:58.959180 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:15:58.959191 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:15:58.959201 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:15:58.959212 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:15:58.959222 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:15:58.959233 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:15:58.959243 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:15:58.959253 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:15:58.959266 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:15:58.959428 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:15:58.959575 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:15:58.959709 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:15:58 UTC (1757722558)
Sep 13 00:15:58.959853 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 13 00:15:58.959866 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 00:15:58.959877 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 13 00:15:58.959887 kernel: efifb: probing for efifb
Sep 13 00:15:58.959902 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 13 00:15:58.959929 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 13 00:15:58.959939 kernel: efifb: scrolling: redraw
Sep 13 00:15:58.959964 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:15:58.959974 kernel: Console: switching to colour frame buffer device 160x50
Sep 13 00:15:58.959984 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:15:58.959995 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:15:58.960005 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:15:58.960016 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:15:58.960029 kernel: Segment Routing with IPv6
Sep 13 00:15:58.960040 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:15:58.960050 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:15:58.960060 kernel: Key type dns_resolver registered
Sep 13 00:15:58.960070 kernel: IPI shorthand broadcast: enabled
Sep 13 00:15:58.960081 kernel: sched_clock: Marking stable (6595003267, 186728119)->(6855743375, -74011989)
Sep 13 00:15:58.960091 kernel: registered taskstats version 1
Sep 13 00:15:58.960101 kernel: Loading compiled-in X.509 certificates
Sep 13 00:15:58.960111 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: dd6b45f5ed9ac8d42d60bdb17f83ef06c8bcd8f6'
Sep 13 00:15:58.960122 kernel: Demotion targets for Node 0: null
Sep 13 00:15:58.960134 kernel: Key type .fscrypt registered
Sep 13 00:15:58.960144 kernel: Key
type fscrypt-provisioning registered Sep 13 00:15:58.960154 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:15:58.960165 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:15:58.960175 kernel: ima: No architecture policies found Sep 13 00:15:58.960185 kernel: clk: Disabling unused clocks Sep 13 00:15:58.960195 kernel: Warning: unable to open an initial console. Sep 13 00:15:58.960206 kernel: Freeing unused kernel image (initmem) memory: 53828K Sep 13 00:15:58.960218 kernel: Write protecting the kernel read-only data: 24576k Sep 13 00:15:58.960228 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 13 00:15:58.960239 kernel: Run /init as init process Sep 13 00:15:58.960249 kernel: with arguments: Sep 13 00:15:58.960259 kernel: /init Sep 13 00:15:58.960269 kernel: with environment: Sep 13 00:15:58.960279 kernel: HOME=/ Sep 13 00:15:58.960289 kernel: TERM=linux Sep 13 00:15:58.960299 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:15:58.960315 systemd[1]: Successfully made /usr/ read-only. Sep 13 00:15:58.960332 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 13 00:15:58.960344 systemd[1]: Detected virtualization kvm. Sep 13 00:15:58.960354 systemd[1]: Detected architecture x86-64. Sep 13 00:15:58.960365 systemd[1]: Running in initrd. Sep 13 00:15:58.960375 systemd[1]: No hostname configured, using default hostname. Sep 13 00:15:58.960387 systemd[1]: Hostname set to . Sep 13 00:15:58.960400 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:15:58.960411 systemd[1]: Queued start job for default target initrd.target. 
Sep 13 00:15:58.960424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:15:58.960436 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:15:58.960447 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:15:58.960458 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:15:58.960469 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:15:58.960481 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:15:58.960497 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:15:58.960511 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:15:58.960522 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:15:58.960535 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:15:58.960546 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:15:58.960557 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:15:58.960568 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:15:58.960579 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:15:58.960593 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:15:58.960604 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:15:58.960615 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:15:58.960626 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 13 00:15:58.960637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:15:58.960648 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:15:58.960660 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:15:58.960671 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:15:58.960684 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:15:58.960695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:15:58.960706 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:15:58.960718 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 13 00:15:58.960729 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:15:58.960750 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:15:58.960761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:15:58.960772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:15:58.960785 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:15:58.960797 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:15:58.960808 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:15:58.960819 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:15:58.960927 systemd-journald[219]: Collecting audit messages is disabled. Sep 13 00:15:58.960957 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 13 00:15:58.960969 systemd-journald[219]: Journal started Sep 13 00:15:58.960998 systemd-journald[219]: Runtime Journal (/run/log/journal/4fa8da2dc8c546e3b44c0dde4172668a) is 6M, max 48.5M, 42.4M free. Sep 13 00:15:58.951970 systemd-modules-load[221]: Inserted module 'overlay' Sep 13 00:15:58.963160 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:15:58.969604 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:15:58.975382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:15:58.977410 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:15:58.983018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:15:58.987963 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:15:58.990777 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 13 00:15:58.991806 kernel: Bridge firewalling registered Sep 13 00:15:58.996199 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:15:59.001097 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:15:59.005301 systemd-tmpfiles[240]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 13 00:15:59.006150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:15:59.011864 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:15:59.019344 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:15:59.021470 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 13 00:15:59.039118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:15:59.040574 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:15:59.063613 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294 Sep 13 00:15:59.089012 systemd-resolved[255]: Positive Trust Anchors: Sep 13 00:15:59.089036 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:15:59.089079 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:15:59.092242 systemd-resolved[255]: Defaulting to hostname 'linux'. Sep 13 00:15:59.093715 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:15:59.099826 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:15:59.201976 kernel: SCSI subsystem initialized Sep 13 00:15:59.213968 kernel: Loading iSCSI transport class v2.0-870. 
Sep 13 00:15:59.227969 kernel: iscsi: registered transport (tcp) Sep 13 00:15:59.255157 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:15:59.255214 kernel: QLogic iSCSI HBA Driver Sep 13 00:15:59.283106 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:15:59.318854 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:15:59.321958 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:15:59.412661 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:15:59.415804 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:15:59.501012 kernel: raid6: avx2x4 gen() 20957 MB/s Sep 13 00:15:59.517983 kernel: raid6: avx2x2 gen() 20712 MB/s Sep 13 00:15:59.535174 kernel: raid6: avx2x1 gen() 17471 MB/s Sep 13 00:15:59.535267 kernel: raid6: using algorithm avx2x4 gen() 20957 MB/s Sep 13 00:15:59.553198 kernel: raid6: .... xor() 6077 MB/s, rmw enabled Sep 13 00:15:59.553306 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:15:59.578973 kernel: xor: automatically using best checksumming function avx Sep 13 00:15:59.911202 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:15:59.924453 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:15:59.929319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:15:59.975411 systemd-udevd[470]: Using default interface naming scheme 'v255'. Sep 13 00:15:59.983499 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:15:59.986896 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:16:00.019307 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 13 00:16:00.062131 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 13 00:16:00.070031 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:16:00.190524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:16:00.195346 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:16:00.269946 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 13 00:16:00.278589 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:16:00.282263 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:16:00.284941 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:16:00.296267 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:16:00.296408 kernel: GPT:9289727 != 19775487 Sep 13 00:16:00.296462 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:16:00.296530 kernel: GPT:9289727 != 19775487 Sep 13 00:16:00.296573 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:16:00.296616 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:16:00.295439 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:16:00.295757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:16:00.302799 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:16:00.306952 kernel: AES CTR mode by8 optimization enabled Sep 13 00:16:00.307027 kernel: libata version 3.00 loaded. Sep 13 00:16:00.308351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:16:00.312648 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 13 00:16:00.335164 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:16:00.335626 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:16:00.340463 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 13 00:16:00.340765 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 13 00:16:00.341013 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:16:00.369074 kernel: scsi host0: ahci Sep 13 00:16:00.371260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:16:00.375177 kernel: scsi host1: ahci Sep 13 00:16:00.375445 kernel: scsi host2: ahci Sep 13 00:16:00.379940 kernel: scsi host3: ahci Sep 13 00:16:00.383980 kernel: scsi host4: ahci Sep 13 00:16:00.386888 kernel: scsi host5: ahci Sep 13 00:16:00.388131 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 13 00:16:00.388153 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 13 00:16:00.388166 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 13 00:16:00.390445 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 13 00:16:00.390469 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 13 00:16:00.391686 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 13 00:16:00.392538 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 13 00:16:00.418766 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 00:16:00.440588 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:16:00.451720 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 13 00:16:00.457465 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 00:16:00.461641 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:16:00.581592 disk-uuid[631]: Primary Header is updated. Sep 13 00:16:00.581592 disk-uuid[631]: Secondary Entries is updated. Sep 13 00:16:00.581592 disk-uuid[631]: Secondary Header is updated. Sep 13 00:16:00.589035 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:16:00.595049 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:16:00.698957 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:16:00.701125 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:16:00.701157 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:16:00.703103 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:16:00.703134 kernel: ata3.00: LPM support broken, forcing max_power Sep 13 00:16:00.703148 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:16:00.703480 kernel: ata3.00: applying bridge limits Sep 13 00:16:00.704944 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:16:00.704969 kernel: ata3.00: LPM support broken, forcing max_power Sep 13 00:16:00.706143 kernel: ata3.00: configured for UDMA/100 Sep 13 00:16:00.706953 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:16:00.709933 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:16:00.761478 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:16:00.763068 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:16:00.788942 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:16:01.175412 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:16:01.177611 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 13 00:16:01.179735 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:16:01.181077 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:16:01.184856 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:16:01.225269 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:16:01.595942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:16:01.596690 disk-uuid[632]: The operation has completed successfully. Sep 13 00:16:01.647010 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:16:01.647188 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:16:01.682114 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:16:01.707967 sh[661]: Success Sep 13 00:16:01.727962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:16:01.728042 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:16:01.729795 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 13 00:16:01.741954 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 13 00:16:01.787315 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:16:01.791745 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:16:01.830212 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 13 00:16:01.836584 kernel: BTRFS: device fsid ca815b72-c68a-4b5e-8622-cfb6842bab47 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (673) Sep 13 00:16:01.836621 kernel: BTRFS info (device dm-0): first mount of filesystem ca815b72-c68a-4b5e-8622-cfb6842bab47 Sep 13 00:16:01.836638 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:16:01.852980 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:16:01.853071 kernel: BTRFS info (device dm-0): enabling free space tree Sep 13 00:16:01.854799 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:16:01.856536 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 13 00:16:01.858162 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:16:01.859167 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:16:01.861123 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:16:01.904958 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (706) Sep 13 00:16:01.908450 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e Sep 13 00:16:01.908519 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:16:01.913109 kernel: BTRFS info (device vda6): turning on async discard Sep 13 00:16:01.913139 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 00:16:01.921159 kernel: BTRFS info (device vda6): last unmount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e Sep 13 00:16:01.930371 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:16:01.934092 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 13 00:16:02.201004 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:16:02.206000 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:16:02.294813 ignition[757]: Ignition 2.21.0 Sep 13 00:16:02.294832 ignition[757]: Stage: fetch-offline Sep 13 00:16:02.294876 ignition[757]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:16:02.294886 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:16:02.294998 ignition[757]: parsed url from cmdline: "" Sep 13 00:16:02.295003 ignition[757]: no config URL provided Sep 13 00:16:02.295009 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:16:02.295019 ignition[757]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:16:02.295046 ignition[757]: op(1): [started] loading QEMU firmware config module Sep 13 00:16:02.295052 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:16:02.307275 ignition[757]: op(1): [finished] loading QEMU firmware config module Sep 13 00:16:02.331076 systemd-networkd[849]: lo: Link UP Sep 13 00:16:02.331091 systemd-networkd[849]: lo: Gained carrier Sep 13 00:16:02.333349 systemd-networkd[849]: Enumeration completed Sep 13 00:16:02.333775 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:16:02.333779 systemd-networkd[849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:16:02.334033 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:16:02.335862 systemd-networkd[849]: eth0: Link UP Sep 13 00:16:02.336090 systemd-networkd[849]: eth0: Gained carrier Sep 13 00:16:02.336103 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 13 00:16:02.336881 systemd[1]: Reached target network.target - Network. Sep 13 00:16:02.358036 systemd-networkd[849]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:16:02.366400 ignition[757]: parsing config with SHA512: 560e6ae35ee72a3203a0b7be2360634e7a62223ea8840ad1bf869be96450ac454abd0240736726a926933cdfbe9bccdcd493c00e403c38dfcef2cf68c6e17dc4 Sep 13 00:16:02.375098 unknown[757]: fetched base config from "system" Sep 13 00:16:02.375113 unknown[757]: fetched user config from "qemu" Sep 13 00:16:02.375705 ignition[757]: fetch-offline: fetch-offline passed Sep 13 00:16:02.375799 ignition[757]: Ignition finished successfully Sep 13 00:16:02.379678 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:16:02.384065 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:16:02.386860 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:16:02.444210 ignition[856]: Ignition 2.21.0 Sep 13 00:16:02.444226 ignition[856]: Stage: kargs Sep 13 00:16:02.444398 ignition[856]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:16:02.444411 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:16:02.447388 ignition[856]: kargs: kargs passed Sep 13 00:16:02.447526 ignition[856]: Ignition finished successfully Sep 13 00:16:02.454079 systemd-resolved[255]: Detected conflict on linux IN A 10.0.0.20 Sep 13 00:16:02.454096 systemd-resolved[255]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Sep 13 00:16:02.458425 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:16:02.461356 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 13 00:16:02.541094 ignition[864]: Ignition 2.21.0 Sep 13 00:16:02.541110 ignition[864]: Stage: disks Sep 13 00:16:02.541313 ignition[864]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:16:02.541329 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:16:02.548354 ignition[864]: disks: disks passed Sep 13 00:16:02.548454 ignition[864]: Ignition finished successfully Sep 13 00:16:02.553688 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:16:02.555211 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:16:02.557469 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:16:02.559033 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:16:02.561430 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:16:02.564023 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:16:02.567461 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:16:02.606168 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 13 00:16:02.617827 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:16:02.622527 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:16:03.215984 kernel: EXT4-fs (vda9): mounted filesystem 7f859ed0-e8c8-40c1-91d3-e1e964d8c4e8 r/w with ordered data mode. Quota mode: none. Sep 13 00:16:03.217370 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:16:03.220021 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:16:03.224086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:16:03.226977 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:16:03.229118 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 13 00:16:03.229170 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:16:03.229197 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:16:03.250387 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:16:03.254716 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:16:03.258049 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (882) Sep 13 00:16:03.260938 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e Sep 13 00:16:03.260990 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:16:03.264956 kernel: BTRFS info (device vda6): turning on async discard Sep 13 00:16:03.264980 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 00:16:03.266940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:16:03.368070 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:16:03.374010 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:16:03.380151 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:16:03.386320 initrd-setup-root[927]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:16:03.580263 systemd-networkd[849]: eth0: Gained IPv6LL Sep 13 00:16:03.630240 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:16:03.632239 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:16:03.634270 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:16:03.661835 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 13 00:16:03.663928 kernel: BTRFS info (device vda6): last unmount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:16:03.684334 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:16:03.716858 ignition[996]: INFO : Ignition 2.21.0
Sep 13 00:16:03.716858 ignition[996]: INFO : Stage: mount
Sep 13 00:16:03.716858 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:03.716858 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:03.721389 ignition[996]: INFO : mount: mount passed
Sep 13 00:16:03.721389 ignition[996]: INFO : Ignition finished successfully
Sep 13 00:16:03.725543 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:16:03.728187 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:16:04.219398 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:16:04.244758 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1008)
Sep 13 00:16:04.244818 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:16:04.244830 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:16:04.250362 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 00:16:04.250450 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 00:16:04.252543 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:16:04.296930 ignition[1025]: INFO : Ignition 2.21.0
Sep 13 00:16:04.296930 ignition[1025]: INFO : Stage: files
Sep 13 00:16:04.299179 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:04.299179 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:04.299179 ignition[1025]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:16:04.303081 ignition[1025]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:16:04.303081 ignition[1025]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:16:04.306438 ignition[1025]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:16:04.306438 ignition[1025]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:16:04.306438 ignition[1025]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:16:04.305219 unknown[1025]: wrote ssh authorized keys file for user: core
Sep 13 00:16:04.312402 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:16:04.312402 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 13 00:16:04.363239 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:16:04.535280 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:16:04.535280 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:16:04.539902 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:16:04.539902 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:16:04.539902 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:16:04.539902 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:16:04.539902 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:16:04.539902 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:16:04.539902 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:16:04.555021 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:16:04.555021 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:16:04.555021 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:16:04.555021 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:16:04.555021 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:16:04.555021 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 13 00:16:04.891438 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:16:05.750933 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:16:05.750933 ignition[1025]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:16:05.755835 ignition[1025]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:16:05.758759 ignition[1025]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:16:05.758759 ignition[1025]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:16:05.758759 ignition[1025]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 13 00:16:05.764577 ignition[1025]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:16:05.764577 ignition[1025]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:16:05.764577 ignition[1025]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 13 00:16:05.764577 ignition[1025]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:16:05.786517 ignition[1025]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:16:05.793417 ignition[1025]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:16:05.795272 ignition[1025]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:16:05.795272 ignition[1025]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:16:05.795272 ignition[1025]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:16:05.795272 ignition[1025]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:16:05.795272 ignition[1025]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:16:05.795272 ignition[1025]: INFO : files: files passed
Sep 13 00:16:05.795272 ignition[1025]: INFO : Ignition finished successfully
Sep 13 00:16:05.799231 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:16:05.806580 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:16:05.809894 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:16:05.834790 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:16:05.835160 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:16:05.836319 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 00:16:05.843645 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:16:05.843645 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:16:05.848560 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:16:05.852080 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:16:05.853753 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:16:05.858018 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:16:05.941698 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:16:05.941862 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:16:05.943276 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:16:05.945644 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:16:05.949983 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:16:05.951148 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:16:05.982933 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:16:05.984960 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:16:06.010553 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:16:06.012487 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:16:06.013827 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:16:06.016626 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:16:06.016873 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:16:06.018446 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:16:06.018804 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:16:06.019368 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:16:06.019809 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:16:06.020363 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:16:06.020714 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 00:16:06.021286 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:16:06.021637 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:16:06.022073 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:16:06.022649 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:16:06.023002 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:16:06.023495 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:16:06.023644 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:16:06.097723 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:16:06.098965 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:16:06.099235 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:16:06.103347 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:16:06.104433 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:16:06.104612 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:16:06.110157 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:16:06.110304 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:16:06.111476 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:16:06.113755 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:16:06.118990 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:16:06.119163 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:16:06.122013 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:16:06.125250 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:16:06.125360 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:16:06.126830 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:16:06.126945 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:16:06.128695 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:16:06.128849 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:16:06.130697 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:16:06.130822 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:16:06.133945 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:16:06.134816 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:16:06.134963 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:16:06.140759 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:16:06.148191 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:16:06.148441 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:16:06.149648 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:16:06.149785 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:16:06.162342 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:16:06.162668 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:16:06.185335 ignition[1082]: INFO : Ignition 2.21.0
Sep 13 00:16:06.185335 ignition[1082]: INFO : Stage: umount
Sep 13 00:16:06.188341 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:06.188341 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:06.191352 ignition[1082]: INFO : umount: umount passed
Sep 13 00:16:06.191352 ignition[1082]: INFO : Ignition finished successfully
Sep 13 00:16:06.190873 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:16:06.195050 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:16:06.195210 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:16:06.197713 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:16:06.197846 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:16:06.201241 systemd[1]: Stopped target network.target - Network.
Sep 13 00:16:06.203359 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:16:06.203433 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:16:06.205596 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:16:06.205726 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:16:06.208618 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:16:06.208701 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:16:06.209867 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:16:06.209959 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:16:06.210382 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:16:06.210442 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:16:06.210996 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:16:06.218325 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:16:06.229395 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:16:06.229615 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:16:06.235082 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 00:16:06.235550 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:16:06.235627 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:16:06.240673 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 00:16:06.243706 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:16:06.243932 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:16:06.254221 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 00:16:06.254445 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 00:16:06.255933 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:16:06.256006 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:16:06.259438 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:16:06.260406 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:16:06.260473 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:16:06.260844 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:16:06.260895 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:16:06.266937 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:16:06.267044 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:16:06.270762 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:16:06.275523 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:16:06.294078 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:16:06.294302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:16:06.299202 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:16:06.299309 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:16:06.301216 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:16:06.301263 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:16:06.301630 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:16:06.301683 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:16:06.309034 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:16:06.309146 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:16:06.311570 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:16:06.311649 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:16:06.323070 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:16:06.324562 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 00:16:06.324647 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:16:06.331096 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:16:06.331210 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:16:06.335838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:16:06.335967 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:16:06.341390 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:16:06.341582 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:16:06.345382 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:16:06.345542 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:16:06.348342 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:16:06.352876 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:16:06.391644 systemd[1]: Switching root.
Sep 13 00:16:06.439293 systemd-journald[219]: Journal stopped
Sep 13 00:16:07.841892 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:16:07.841991 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:16:07.842012 kernel: SELinux: policy capability open_perms=1
Sep 13 00:16:07.842033 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:16:07.842047 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:16:07.842061 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:16:07.842075 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:16:07.842098 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:16:07.842112 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:16:07.842125 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 00:16:07.842139 kernel: audit: type=1403 audit(1757722566.852:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:16:07.842154 systemd[1]: Successfully loaded SELinux policy in 57.034ms.
Sep 13 00:16:07.842178 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 21.718ms.
Sep 13 00:16:07.842194 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 00:16:07.842209 systemd[1]: Detected virtualization kvm.
Sep 13 00:16:07.842224 systemd[1]: Detected architecture x86-64.
Sep 13 00:16:07.842241 systemd[1]: Detected first boot.
Sep 13 00:16:07.842256 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:16:07.842271 zram_generator::config[1129]: No configuration found.
Sep 13 00:16:07.842287 kernel: Guest personality initialized and is inactive
Sep 13 00:16:07.842300 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 00:16:07.842314 kernel: Initialized host personality
Sep 13 00:16:07.842328 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 00:16:07.842342 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:16:07.842357 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 00:16:07.842380 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:16:07.842395 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:16:07.842410 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:16:07.842425 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:16:07.842439 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:16:07.842454 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:16:07.842470 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:16:07.842486 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:16:07.842513 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:16:07.842529 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:16:07.842544 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:16:07.842559 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:16:07.842574 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:16:07.842589 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:16:07.842604 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:16:07.842620 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:16:07.842637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:16:07.842652 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:16:07.842666 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:16:07.842681 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:16:07.842695 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:16:07.842710 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:16:07.842724 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:16:07.842745 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:16:07.842760 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:16:07.842777 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:16:07.842792 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:16:07.842806 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:16:07.842821 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:16:07.842837 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:16:07.842852 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 00:16:07.842866 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:16:07.842886 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:16:07.842901 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:16:07.842933 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:16:07.842947 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:16:07.842968 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:16:07.842982 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:16:07.842998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:07.843013 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:16:07.843028 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:16:07.843043 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:16:07.843058 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:16:07.843077 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:16:07.843091 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:16:07.843106 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:16:07.843122 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:16:07.843137 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:16:07.843152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:16:07.843167 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:16:07.843182 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:16:07.843199 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:16:07.843214 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:16:07.843230 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:16:07.843245 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:16:07.843260 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:16:07.843275 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:16:07.843290 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:16:07.843305 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:16:07.843322 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:16:07.843337 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:16:07.843351 kernel: loop: module loaded
Sep 13 00:16:07.843366 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:16:07.843387 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:16:07.843403 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 00:16:07.843418 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:16:07.843435 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:16:07.843455 systemd[1]: Stopped verity-setup.service.
Sep 13 00:16:07.843470 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:07.843484 kernel: fuse: init (API version 7.41)
Sep 13 00:16:07.843507 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:16:07.843522 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:16:07.843538 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:16:07.843555 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:16:07.843570 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:16:07.843585 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:16:07.843603 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:16:07.843618 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:16:07.843632 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:16:07.843649 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:16:07.843664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:16:07.843679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:16:07.843695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:16:07.843709 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:16:07.843753 systemd-journald[1204]: Collecting audit messages is disabled.
Sep 13 00:16:07.843779 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:16:07.843797 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:16:07.843815 systemd-journald[1204]: Journal started
Sep 13 00:16:07.843842 systemd-journald[1204]: Runtime Journal (/run/log/journal/4fa8da2dc8c546e3b44c0dde4172668a) is 6M, max 48.5M, 42.4M free.
Sep 13 00:16:07.560615 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:16:07.575682 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:16:07.576233 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:16:07.846207 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:16:07.847402 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:16:07.847752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:16:07.849778 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:16:07.850956 kernel: ACPI: bus type drm_connector registered
Sep 13 00:16:07.852448 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:16:07.854395 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:16:07.854694 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:16:07.856421 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:16:07.858329 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 13 00:16:07.879197 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:16:07.882552 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:16:07.885120 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:16:07.886386 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:16:07.886422 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:16:07.888805 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 13 00:16:07.896214 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:16:07.898834 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:16:07.900879 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:16:07.904125 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:16:07.905550 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:16:07.906999 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:16:07.908508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:16:07.911047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:16:07.914197 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:16:07.922665 systemd-journald[1204]: Time spent on flushing to /var/log/journal/4fa8da2dc8c546e3b44c0dde4172668a is 34.436ms for 1066 entries.
Sep 13 00:16:07.922665 systemd-journald[1204]: System Journal (/var/log/journal/4fa8da2dc8c546e3b44c0dde4172668a) is 8M, max 195.6M, 187.6M free.
Sep 13 00:16:07.979747 systemd-journald[1204]: Received client request to flush runtime journal.
Sep 13 00:16:07.979820 kernel: loop0: detected capacity change from 0 to 224512
Sep 13 00:16:07.918067 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:16:07.921218 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:16:07.924178 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:16:07.954251 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:16:07.956113 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:16:07.959144 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 13 00:16:07.970469 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:16:07.981587 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:16:07.985993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:16:07.996022 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 13 00:16:07.999996 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:16:08.012567 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:16:08.016622 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:16:08.030019 kernel: loop1: detected capacity change from 0 to 113872
Sep 13 00:16:08.068153 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 13 00:16:08.068178 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 13 00:16:08.073995 kernel: loop2: detected capacity change from 0 to 146240
Sep 13 00:16:08.075244 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:16:08.115993 kernel: loop3: detected capacity change from 0 to 224512
Sep 13 00:16:08.128949 kernel: loop4: detected capacity change from 0 to 113872
Sep 13 00:16:08.139971 kernel: loop5: detected capacity change from 0 to 146240
Sep 13 00:16:08.158852 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 00:16:08.159629 (sd-merge)[1270]: Merged extensions into '/usr'.
Sep 13 00:16:08.345304 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:16:08.345325 systemd[1]: Reloading...
Sep 13 00:16:08.458305 zram_generator::config[1299]: No configuration found.
Sep 13 00:16:08.592732 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:16:08.617285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:16:08.713031 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:16:08.713136 systemd[1]: Reloading finished in 366 ms.
Sep 13 00:16:08.736700 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:16:08.739129 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:16:08.755930 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:16:08.758444 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:16:08.779666 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:16:08.779691 systemd[1]: Reloading...
Sep 13 00:16:08.811193 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 13 00:16:08.811357 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 13 00:16:08.811782 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:16:08.812118 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:16:08.813228 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:16:08.813566 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Sep 13 00:16:08.813653 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Sep 13 00:16:08.821943 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:16:08.821961 systemd-tmpfiles[1334]: Skipping /boot
Sep 13 00:16:08.843762 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:16:08.844352 systemd-tmpfiles[1334]: Skipping /boot
Sep 13 00:16:08.863972 zram_generator::config[1364]: No configuration found.
Sep 13 00:16:08.967578 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:16:09.115346 systemd[1]: Reloading finished in 335 ms.
Sep 13 00:16:09.137106 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:16:09.159838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:16:09.170302 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 00:16:09.173949 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:16:09.175571 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:16:09.184466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:16:09.189277 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:16:09.193780 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:16:09.199121 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:09.201451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:16:09.204334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:16:09.209737 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:16:09.214154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:16:09.215660 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:16:09.215802 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:16:09.219404 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:16:09.220743 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:09.222043 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:16:09.229774 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:09.230107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:16:09.230400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:16:09.230561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:16:09.235128 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:16:09.236529 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:09.240463 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:09.240744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:16:09.243257 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:16:09.244781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:16:09.244947 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:16:09.245138 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:16:09.255961 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:16:09.258746 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:16:09.263955 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:16:09.265287 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:16:09.313079 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:16:09.314987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:16:09.318561 augenrules[1436]: No rules
Sep 13 00:16:09.318475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:16:09.318786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:16:09.320603 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:16:09.321007 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 00:16:09.322529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:16:09.322845 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:16:09.324583 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:16:09.324844 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:16:09.326932 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:16:09.336380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:16:09.336621 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:16:09.341115 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:16:09.350810 systemd-udevd[1404]: Using default interface naming scheme 'v255'.
Sep 13 00:16:09.366963 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:16:09.372996 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:16:09.382250 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:16:09.597726 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:16:09.751433 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 13 00:16:09.751520 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 13 00:16:09.751773 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 00:16:09.753945 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 00:16:09.761868 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:16:09.763946 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:16:09.855242 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:16:09.858219 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:16:09.920971 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:16:09.946858 systemd-networkd[1459]: lo: Link UP
Sep 13 00:16:09.946869 systemd-networkd[1459]: lo: Gained carrier
Sep 13 00:16:09.949974 systemd-networkd[1459]: Enumeration completed
Sep 13 00:16:09.950448 systemd-networkd[1459]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:16:09.950456 systemd-networkd[1459]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:16:09.950887 systemd-resolved[1403]: Positive Trust Anchors:
Sep 13 00:16:09.950899 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:16:09.950949 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:16:09.951305 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:16:09.953055 systemd-networkd[1459]: eth0: Link UP
Sep 13 00:16:09.953342 systemd-networkd[1459]: eth0: Gained carrier
Sep 13 00:16:09.953421 systemd-networkd[1459]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:16:09.955487 systemd-resolved[1403]: Defaulting to hostname 'linux'.
Sep 13 00:16:09.957282 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 13 00:16:09.963967 systemd-networkd[1459]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:16:09.965100 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:16:09.966989 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:16:09.972875 kernel: kvm_amd: TSC scaling supported
Sep 13 00:16:09.972960 kernel: kvm_amd: Nested Virtualization enabled
Sep 13 00:16:09.972980 kernel: kvm_amd: Nested Paging enabled
Sep 13 00:16:09.972998 kernel: kvm_amd: LBR virtualization supported
Sep 13 00:16:09.974187 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 13 00:16:09.974214 kernel: kvm_amd: Virtual GIF supported
Sep 13 00:16:09.977766 systemd[1]: Reached target network.target - Network.
Sep 13 00:16:09.980099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:16:09.985264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:16:10.004336 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:16:10.005825 systemd-timesyncd[1421]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:16:10.005884 systemd-timesyncd[1421]: Initial clock synchronization to Sat 2025-09-13 00:16:10.187715 UTC.
Sep 13 00:16:10.008085 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:16:10.023855 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 13 00:16:10.064956 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:16:10.077121 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:16:10.079009 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:16:10.080321 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:16:10.081741 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:16:10.083105 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 13 00:16:10.084584 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:16:10.085840 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:16:10.087220 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:16:10.088659 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:16:10.088702 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:16:10.089740 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:16:10.092369 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:16:10.095799 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:16:10.100675 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 13 00:16:10.102291 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 13 00:16:10.103693 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 13 00:16:10.107779 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:16:10.109464 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 13 00:16:10.111489 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:16:10.113565 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:16:10.114627 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:16:10.115683 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:16:10.115720 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:16:10.116873 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:16:10.119203 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:16:10.121379 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:16:10.123712 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:16:10.127058 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:16:10.129022 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:16:10.131124 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 13 00:16:10.134292 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:16:10.134388 jq[1528]: false
Sep 13 00:16:10.136772 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:16:10.141106 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:16:10.146499 extend-filesystems[1529]: Found /dev/vda6
Sep 13 00:16:10.150208 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:16:10.150610 extend-filesystems[1529]: Found /dev/vda9
Sep 13 00:16:10.152964 extend-filesystems[1529]: Checking size of /dev/vda9
Sep 13 00:16:10.159791 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:16:10.163167 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:16:10.165339 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:16:10.166813 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:16:10.170238 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing passwd entry cache
Sep 13 00:16:10.170322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:16:10.170410 oslogin_cache_refresh[1530]: Refreshing passwd entry cache
Sep 13 00:16:10.173026 extend-filesystems[1529]: Resized partition /dev/vda9
Sep 13 00:16:10.174650 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:16:10.177628 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:16:10.180713 extend-filesystems[1553]: resize2fs 1.47.2 (1-Jan-2025)
Sep 13 00:16:10.182623 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:16:10.183405 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:16:10.183776 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:16:10.184526 jq[1550]: true
Sep 13 00:16:10.186695 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:16:10.187708 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:16:10.189966 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting users, quitting
Sep 13 00:16:10.189966 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 00:16:10.189966 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing group entry cache
Sep 13 00:16:10.189065 oslogin_cache_refresh[1530]: Failure getting users, quitting
Sep 13 00:16:10.189090 oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 00:16:10.189162 oslogin_cache_refresh[1530]: Refreshing group entry cache
Sep 13 00:16:10.190942 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:16:10.201519 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting groups, quitting
Sep 13 00:16:10.201519 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 00:16:10.201507 oslogin_cache_refresh[1530]: Failure getting groups, quitting
Sep 13 00:16:10.201532 oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 00:16:10.206562 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 13 00:16:10.206979 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 13 00:16:10.211333 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:16:10.216938 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:16:10.222811 jq[1556]: true
Sep 13 00:16:10.238889 update_engine[1548]: I20250913 00:16:10.234184 1548 main.cc:92] Flatcar Update Engine starting
Sep 13 00:16:10.240169 extend-filesystems[1553]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:16:10.240169 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:16:10.240169 extend-filesystems[1553]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:16:10.244292 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:16:10.246035 extend-filesystems[1529]: Resized filesystem in /dev/vda9
Sep 13 00:16:10.244759 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:16:10.264123 tar[1554]: linux-amd64/LICENSE
Sep 13 00:16:10.265185 tar[1554]: linux-amd64/helm
Sep 13 00:16:10.282083 systemd-logind[1543]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 13 00:16:10.282117 systemd-logind[1543]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:16:10.282807 systemd-logind[1543]: New seat seat0.
Sep 13 00:16:10.286077 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:16:10.296814 dbus-daemon[1526]: [system] SELinux support is enabled
Sep 13 00:16:10.297338 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:16:10.304763 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:16:10.304810 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:16:10.306879 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:16:10.306922 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:16:10.317321 update_engine[1548]: I20250913 00:16:10.317260 1548 update_check_scheduler.cc:74] Next update check in 4m18s
Sep 13 00:16:10.319121 dbus-daemon[1526]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 00:16:10.319214 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:16:10.322246 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:16:10.327479 bash[1590]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:16:10.347376 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:16:10.350330 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 00:16:10.429796 sshd_keygen[1566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:16:10.467054 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:16:10.473103 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:16:10.566646 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:16:10.570967 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:16:10.571287 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:16:10.576368 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:16:10.605395 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:16:10.613930 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:16:10.618147 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 00:16:10.620605 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:16:10.691117 containerd[1557]: time="2025-09-13T00:16:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 13 00:16:10.691864 containerd[1557]: time="2025-09-13T00:16:10.691811614Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 13 00:16:10.701780 containerd[1557]: time="2025-09-13T00:16:10.701736320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.594µs"
Sep 13 00:16:10.701780 containerd[1557]: time="2025-09-13T00:16:10.701769923Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 13 00:16:10.701861 containerd[1557]: time="2025-09-13T00:16:10.701803516Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 13 00:16:10.702065 containerd[1557]: time="2025-09-13T00:16:10.702038146Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 13 00:16:10.702065 containerd[1557]: time="2025-09-13T00:16:10.702061670Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 13 00:16:10.702117 containerd[1557]: time="2025-09-13T00:16:10.702094141Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 00:16:10.702191 containerd[1557]: time="2025-09-13T00:16:10.702170665Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 00:16:10.702191 containerd[1557]: time="2025-09-13T00:16:10.702185542Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 00:16:10.702529 containerd[1557]: time="2025-09-13T00:16:10.702495654Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 00:16:10.702529 containerd[1557]: time="2025-09-13T00:16:10.702514580Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 00:16:10.702529 containerd[1557]: time="2025-09-13T00:16:10.702528065Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 00:16:10.702598 containerd[1557]: time="2025-09-13T00:16:10.702535960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 13 00:16:10.702682 containerd[1557]: time="2025-09-13T00:16:10.702655654Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 13 00:16:10.702983 containerd[1557]: time="2025-09-13T00:16:10.702957030Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 00:16:10.703020 containerd[1557]: time="2025-09-13T00:16:10.702995993Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 00:16:10.703020 containerd[1557]: time="2025-09-13T00:16:10.703006512Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 13 00:16:10.703071 containerd[1557]: time="2025-09-13T00:16:10.703041748Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 13 00:16:10.703304 containerd[1557]: time="2025-09-13T00:16:10.703279394Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 13 00:16:10.703379 containerd[1557]: time="2025-09-13T00:16:10.703361338Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710224020Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710289583Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710320251Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710335319Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710349636Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710361999Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710375935Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710388529Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710422262Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 13 00:16:10.710943 containerd[1557]:
time="2025-09-13T00:16:10.710437070Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710448021Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710462137Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710632296Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 13 00:16:10.710943 containerd[1557]: time="2025-09-13T00:16:10.710656331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710673624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710685486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710698861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710714651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710727415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710738986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710751119Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710765205Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710776697Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710877987Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 13 00:16:10.711268 containerd[1557]: time="2025-09-13T00:16:10.710898636Z" level=info msg="Start snapshots syncer" Sep 13 00:16:10.711597 containerd[1557]: time="2025-09-13T00:16:10.711546361Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 13 00:16:10.712701 containerd[1557]: time="2025-09-13T00:16:10.712533512Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 13 00:16:10.712701 containerd[1557]: time="2025-09-13T00:16:10.712619133Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.712899258Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713171880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713223827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713243955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713260095Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713276326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713292917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713309768Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713366405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713386713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713405007Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713478415Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713501087Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 00:16:10.715690 containerd[1557]: time="2025-09-13T00:16:10.713515364Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 00:16:10.716116 containerd[1557]: time="2025-09-13T00:16:10.713526936Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 00:16:10.716116 containerd[1557]: time="2025-09-13T00:16:10.713540070Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 13 00:16:10.716116 containerd[1557]: time="2025-09-13T00:16:10.713555449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 13 00:16:10.716116 containerd[1557]: time="2025-09-13T00:16:10.713596015Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 13 00:16:10.716116 containerd[1557]: time="2025-09-13T00:16:10.713637062Z" level=info msg="runtime interface created" Sep 13 00:16:10.716116 containerd[1557]: time="2025-09-13T00:16:10.713645057Z" level=info msg="created NRI interface" Sep 13 00:16:10.716116 containerd[1557]: time="2025-09-13T00:16:10.713659645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 13 00:16:10.716116 containerd[1557]: time="2025-09-13T00:16:10.713682888Z" level=info msg="Connect containerd service" Sep 13 00:16:10.716116 containerd[1557]: time="2025-09-13T00:16:10.713746568Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:16:10.716399 
containerd[1557]: time="2025-09-13T00:16:10.716129618Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038089535Z" level=info msg="Start subscribing containerd event" Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038259908Z" level=info msg="Start recovering state" Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038567525Z" level=info msg="Start event monitor" Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038606493Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038620306Z" level=info msg="Start streaming server" Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038646158Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038658567Z" level=info msg="runtime interface starting up..." Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038669736Z" level=info msg="starting plugins..." Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038689870Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038714001Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:16:11.039019 containerd[1557]: time="2025-09-13T00:16:11.038782695Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:16:11.039493 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 13 00:16:11.040465 containerd[1557]: time="2025-09-13T00:16:11.040427521Z" level=info msg="containerd successfully booted in 0.350075s" Sep 13 00:16:11.113785 tar[1554]: linux-amd64/README.md Sep 13 00:16:11.141566 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:16:11.580896 systemd-networkd[1459]: eth0: Gained IPv6LL Sep 13 00:16:11.588869 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:16:11.600778 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:16:11.611137 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:16:11.634242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:11.645779 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:16:11.718814 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:16:11.719275 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:16:11.725648 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:16:11.733422 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:16:13.390519 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:16:13.394252 systemd[1]: Started sshd@0-10.0.0.20:22-10.0.0.1:55236.service - OpenSSH per-connection server daemon (10.0.0.1:55236). Sep 13 00:16:13.554928 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 55236 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:16:13.559483 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:13.575480 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:16:13.580318 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 13 00:16:13.609021 systemd-logind[1543]: New session 1 of user core. Sep 13 00:16:13.635341 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:16:13.643168 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:16:13.676163 (systemd)[1663]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:16:13.696587 systemd-logind[1543]: New session c1 of user core. Sep 13 00:16:14.001662 systemd[1663]: Queued start job for default target default.target. Sep 13 00:16:14.029647 systemd[1663]: Created slice app.slice - User Application Slice. Sep 13 00:16:14.029687 systemd[1663]: Reached target paths.target - Paths. Sep 13 00:16:14.029747 systemd[1663]: Reached target timers.target - Timers. Sep 13 00:16:14.032200 systemd[1663]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:16:14.058658 systemd[1663]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:16:14.058833 systemd[1663]: Reached target sockets.target - Sockets. Sep 13 00:16:14.058892 systemd[1663]: Reached target basic.target - Basic System. Sep 13 00:16:14.058972 systemd[1663]: Reached target default.target - Main User Target. Sep 13 00:16:14.059025 systemd[1663]: Startup finished in 287ms. Sep 13 00:16:14.059507 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:16:14.076446 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:16:14.176992 systemd[1]: Started sshd@1-10.0.0.20:22-10.0.0.1:55250.service - OpenSSH per-connection server daemon (10.0.0.1:55250). Sep 13 00:16:14.278557 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 55250 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:16:14.281354 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:14.294390 systemd-logind[1543]: New session 2 of user core. 
Sep 13 00:16:14.310626 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:16:14.420320 sshd[1676]: Connection closed by 10.0.0.1 port 55250 Sep 13 00:16:14.423320 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:14.427425 systemd[1]: Started sshd@2-10.0.0.20:22-10.0.0.1:55256.service - OpenSSH per-connection server daemon (10.0.0.1:55256). Sep 13 00:16:14.431862 systemd[1]: sshd@1-10.0.0.20:22-10.0.0.1:55250.service: Deactivated successfully. Sep 13 00:16:14.434573 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:16:14.435561 systemd-logind[1543]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:16:14.440429 systemd-logind[1543]: Removed session 2. Sep 13 00:16:14.483654 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 55256 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:16:14.485535 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:14.492019 systemd-logind[1543]: New session 3 of user core. Sep 13 00:16:14.508075 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:16:14.539783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:14.541579 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:16:14.544022 systemd[1]: Startup finished in 6.685s (kernel) + 8.176s (initrd) + 7.747s (userspace) = 22.610s. Sep 13 00:16:14.561480 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:14.566947 sshd[1686]: Connection closed by 10.0.0.1 port 55256 Sep 13 00:16:14.567417 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:14.576067 systemd[1]: sshd@2-10.0.0.20:22-10.0.0.1:55256.service: Deactivated successfully. 
Sep 13 00:16:14.578549 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:16:14.581943 systemd-logind[1543]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:16:14.584632 systemd-logind[1543]: Removed session 3. Sep 13 00:16:15.466829 kubelet[1690]: E0913 00:16:15.466704 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:15.473896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:15.474249 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:15.474999 systemd[1]: kubelet.service: Consumed 2.843s CPU time, 265.3M memory peak. Sep 13 00:16:24.663633 systemd[1]: Started sshd@3-10.0.0.20:22-10.0.0.1:52054.service - OpenSSH per-connection server daemon (10.0.0.1:52054). Sep 13 00:16:24.723805 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 52054 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:16:24.725467 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:24.730483 systemd-logind[1543]: New session 4 of user core. Sep 13 00:16:24.740053 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:16:24.796985 sshd[1709]: Connection closed by 10.0.0.1 port 52054 Sep 13 00:16:24.797378 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:24.813346 systemd[1]: sshd@3-10.0.0.20:22-10.0.0.1:52054.service: Deactivated successfully. Sep 13 00:16:24.815598 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:16:24.816408 systemd-logind[1543]: Session 4 logged out. Waiting for processes to exit. 
Sep 13 00:16:24.819512 systemd[1]: Started sshd@4-10.0.0.20:22-10.0.0.1:52062.service - OpenSSH per-connection server daemon (10.0.0.1:52062). Sep 13 00:16:24.820154 systemd-logind[1543]: Removed session 4. Sep 13 00:16:24.874133 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 52062 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:16:24.875984 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:24.881016 systemd-logind[1543]: New session 5 of user core. Sep 13 00:16:24.891058 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:16:24.941945 sshd[1717]: Connection closed by 10.0.0.1 port 52062 Sep 13 00:16:24.942679 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:24.952863 systemd[1]: sshd@4-10.0.0.20:22-10.0.0.1:52062.service: Deactivated successfully. Sep 13 00:16:24.955367 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:16:24.956159 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:16:24.959910 systemd[1]: Started sshd@5-10.0.0.20:22-10.0.0.1:52068.service - OpenSSH per-connection server daemon (10.0.0.1:52068). Sep 13 00:16:24.960739 systemd-logind[1543]: Removed session 5. Sep 13 00:16:25.027735 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 52068 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:16:25.029586 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:25.034743 systemd-logind[1543]: New session 6 of user core. Sep 13 00:16:25.051066 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:16:25.107550 sshd[1725]: Connection closed by 10.0.0.1 port 52068 Sep 13 00:16:25.107957 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:25.117090 systemd[1]: sshd@5-10.0.0.20:22-10.0.0.1:52068.service: Deactivated successfully. 
Sep 13 00:16:25.119196 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:16:25.119990 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:16:25.123698 systemd[1]: Started sshd@6-10.0.0.20:22-10.0.0.1:52076.service - OpenSSH per-connection server daemon (10.0.0.1:52076). Sep 13 00:16:25.124487 systemd-logind[1543]: Removed session 6. Sep 13 00:16:25.189724 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 52076 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:16:25.191485 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:25.197252 systemd-logind[1543]: New session 7 of user core. Sep 13 00:16:25.211088 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:16:25.274948 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:16:25.275435 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:16:25.298178 sudo[1734]: pam_unix(sudo:session): session closed for user root Sep 13 00:16:25.300318 sshd[1733]: Connection closed by 10.0.0.1 port 52076 Sep 13 00:16:25.300768 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:25.314305 systemd[1]: sshd@6-10.0.0.20:22-10.0.0.1:52076.service: Deactivated successfully. Sep 13 00:16:25.316262 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:16:25.317193 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:16:25.320893 systemd[1]: Started sshd@7-10.0.0.20:22-10.0.0.1:52092.service - OpenSSH per-connection server daemon (10.0.0.1:52092). Sep 13 00:16:25.321574 systemd-logind[1543]: Removed session 7. 
Sep 13 00:16:25.386864 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 52092 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:16:25.388846 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:25.393787 systemd-logind[1543]: New session 8 of user core. Sep 13 00:16:25.403150 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:16:25.458798 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:16:25.459173 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:16:25.465860 sudo[1744]: pam_unix(sudo:session): session closed for user root Sep 13 00:16:25.472993 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 00:16:25.473328 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:16:25.474507 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:16:25.476662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:25.495567 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 00:16:25.549711 augenrules[1769]: No rules Sep 13 00:16:25.551591 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:16:25.551902 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 00:16:25.553320 sudo[1743]: pam_unix(sudo:session): session closed for user root Sep 13 00:16:25.555225 sshd[1742]: Connection closed by 10.0.0.1 port 52092 Sep 13 00:16:25.555530 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:25.566658 systemd[1]: sshd@7-10.0.0.20:22-10.0.0.1:52092.service: Deactivated successfully. Sep 13 00:16:25.568523 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 13 00:16:25.569392 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:16:25.572498 systemd[1]: Started sshd@8-10.0.0.20:22-10.0.0.1:52096.service - OpenSSH per-connection server daemon (10.0.0.1:52096). Sep 13 00:16:25.573476 systemd-logind[1543]: Removed session 8. Sep 13 00:16:25.625987 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 52096 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:16:25.627287 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:25.632237 systemd-logind[1543]: New session 9 of user core. Sep 13 00:16:25.647072 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:16:25.705067 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:16:25.705509 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:16:25.807646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:25.813311 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:26.108179 kubelet[1796]: E0913 00:16:26.107991 1796 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:26.117469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:26.117702 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:26.118186 systemd[1]: kubelet.service: Consumed 801ms CPU time, 114.6M memory peak. Sep 13 00:16:26.832868 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 13 00:16:26.849444 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:16:27.338989 dockerd[1814]: time="2025-09-13T00:16:27.338860303Z" level=info msg="Starting up" Sep 13 00:16:27.341403 dockerd[1814]: time="2025-09-13T00:16:27.341368383Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 00:16:27.710736 dockerd[1814]: time="2025-09-13T00:16:27.710656615Z" level=info msg="Loading containers: start." Sep 13 00:16:27.722968 kernel: Initializing XFRM netlink socket Sep 13 00:16:28.056383 systemd-networkd[1459]: docker0: Link UP Sep 13 00:16:28.062414 dockerd[1814]: time="2025-09-13T00:16:28.062164421Z" level=info msg="Loading containers: done." Sep 13 00:16:28.085455 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3109911156-merged.mount: Deactivated successfully. Sep 13 00:16:28.087868 dockerd[1814]: time="2025-09-13T00:16:28.087796128Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:16:28.088019 dockerd[1814]: time="2025-09-13T00:16:28.087986721Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 13 00:16:28.088224 dockerd[1814]: time="2025-09-13T00:16:28.088196445Z" level=info msg="Initializing buildkit" Sep 13 00:16:28.124795 dockerd[1814]: time="2025-09-13T00:16:28.124730197Z" level=info msg="Completed buildkit initialization" Sep 13 00:16:28.130395 dockerd[1814]: time="2025-09-13T00:16:28.130322957Z" level=info msg="Daemon has completed initialization" Sep 13 00:16:28.130543 systemd[1]: Started docker.service - Docker Application Container Engine. 
Sep 13 00:16:28.130865 dockerd[1814]: time="2025-09-13T00:16:28.130803051Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:16:29.129674 containerd[1557]: time="2025-09-13T00:16:29.129600103Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 00:16:29.873557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076145849.mount: Deactivated successfully. Sep 13 00:16:31.042624 containerd[1557]: time="2025-09-13T00:16:31.042523083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:31.043513 containerd[1557]: time="2025-09-13T00:16:31.043306170Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 13 00:16:31.044748 containerd[1557]: time="2025-09-13T00:16:31.044680847Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:31.047855 containerd[1557]: time="2025-09-13T00:16:31.047815093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:31.049186 containerd[1557]: time="2025-09-13T00:16:31.049090607Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.919430544s" Sep 13 00:16:31.049239 containerd[1557]: time="2025-09-13T00:16:31.049187782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference 
\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 00:16:31.050191 containerd[1557]: time="2025-09-13T00:16:31.050159559Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 00:16:32.629201 containerd[1557]: time="2025-09-13T00:16:32.629121450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:32.630091 containerd[1557]: time="2025-09-13T00:16:32.630026993Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 13 00:16:32.631301 containerd[1557]: time="2025-09-13T00:16:32.631254822Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:32.633983 containerd[1557]: time="2025-09-13T00:16:32.633953009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:32.635308 containerd[1557]: time="2025-09-13T00:16:32.635242267Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.585050277s" Sep 13 00:16:32.635308 containerd[1557]: time="2025-09-13T00:16:32.635291076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 00:16:32.636138 containerd[1557]: 
time="2025-09-13T00:16:32.636099703Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 00:16:34.926466 containerd[1557]: time="2025-09-13T00:16:34.923452790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:34.929061 containerd[1557]: time="2025-09-13T00:16:34.928965377Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 13 00:16:34.933299 containerd[1557]: time="2025-09-13T00:16:34.930720676Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:34.949835 containerd[1557]: time="2025-09-13T00:16:34.943751914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:34.951009 containerd[1557]: time="2025-09-13T00:16:34.947121820Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.310975789s" Sep 13 00:16:34.951009 containerd[1557]: time="2025-09-13T00:16:34.950085466Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 00:16:34.951009 containerd[1557]: time="2025-09-13T00:16:34.950577837Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:16:36.208881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is 
at 2. Sep 13 00:16:36.261859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:36.579551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:36.592506 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:36.779264 kubelet[2104]: E0913 00:16:36.779043 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:36.790542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:36.790795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:36.795119 systemd[1]: kubelet.service: Consumed 369ms CPU time, 111M memory peak. Sep 13 00:16:37.425134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3412715423.mount: Deactivated successfully. 
Sep 13 00:16:39.212785 containerd[1557]: time="2025-09-13T00:16:39.212630876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:39.271404 containerd[1557]: time="2025-09-13T00:16:39.271356252Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 13 00:16:39.343726 containerd[1557]: time="2025-09-13T00:16:39.343685529Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:39.420071 containerd[1557]: time="2025-09-13T00:16:39.420038435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:39.420724 containerd[1557]: time="2025-09-13T00:16:39.420673620Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 4.470054616s" Sep 13 00:16:39.420764 containerd[1557]: time="2025-09-13T00:16:39.420724813Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 00:16:39.421394 containerd[1557]: time="2025-09-13T00:16:39.421279924Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:16:40.681588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193712042.mount: Deactivated successfully. 
Sep 13 00:16:41.760313 containerd[1557]: time="2025-09-13T00:16:41.760218959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:41.765143 containerd[1557]: time="2025-09-13T00:16:41.765044443Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:16:41.782141 containerd[1557]: time="2025-09-13T00:16:41.782043705Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:41.805560 containerd[1557]: time="2025-09-13T00:16:41.805444900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:41.806617 containerd[1557]: time="2025-09-13T00:16:41.806570276Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.385242337s" Sep 13 00:16:41.806617 containerd[1557]: time="2025-09-13T00:16:41.806603923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:16:41.807287 containerd[1557]: time="2025-09-13T00:16:41.807240651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:16:42.401187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274452528.mount: Deactivated successfully. 
Sep 13 00:16:42.408435 containerd[1557]: time="2025-09-13T00:16:42.408381812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:42.409205 containerd[1557]: time="2025-09-13T00:16:42.409146666Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:16:42.410420 containerd[1557]: time="2025-09-13T00:16:42.410383967Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:42.412428 containerd[1557]: time="2025-09-13T00:16:42.412394633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:42.412923 containerd[1557]: time="2025-09-13T00:16:42.412877583Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 605.585034ms" Sep 13 00:16:42.412973 containerd[1557]: time="2025-09-13T00:16:42.412946941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:16:42.413520 containerd[1557]: time="2025-09-13T00:16:42.413496144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 00:16:44.287698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177784692.mount: 
Deactivated successfully. Sep 13 00:16:46.953828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:16:46.956697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:47.680950 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1003665589 wd_nsec: 1003665399 Sep 13 00:16:47.944176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:47.955515 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:48.042699 kubelet[2234]: E0913 00:16:48.042604 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:48.048062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:48.048297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:48.048734 systemd[1]: kubelet.service: Consumed 1.040s CPU time, 110.5M memory peak. 
Sep 13 00:16:48.960292 containerd[1557]: time="2025-09-13T00:16:48.960220442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:48.960928 containerd[1557]: time="2025-09-13T00:16:48.960860591Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 13 00:16:48.962131 containerd[1557]: time="2025-09-13T00:16:48.962094776Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:48.964973 containerd[1557]: time="2025-09-13T00:16:48.964943111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:48.966183 containerd[1557]: time="2025-09-13T00:16:48.966137140Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.55260269s" Sep 13 00:16:48.966236 containerd[1557]: time="2025-09-13T00:16:48.966185885Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 00:16:51.157761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:51.158004 systemd[1]: kubelet.service: Consumed 1.040s CPU time, 110.5M memory peak. Sep 13 00:16:51.160483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:51.187797 systemd[1]: Reload requested from client PID 2274 ('systemctl') (unit session-9.scope)... 
Sep 13 00:16:51.187813 systemd[1]: Reloading... Sep 13 00:16:51.332055 zram_generator::config[2317]: No configuration found. Sep 13 00:16:51.501262 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:16:51.621339 systemd[1]: Reloading finished in 433 ms. Sep 13 00:16:51.694703 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:16:51.694838 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:16:51.695272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:51.695330 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.2M memory peak. Sep 13 00:16:51.697686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:51.906597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:51.924449 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:16:51.982811 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:16:51.982811 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:16:51.982811 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:16:51.983352 kubelet[2364]: I0913 00:16:51.982879 2364 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:16:52.364864 kubelet[2364]: I0913 00:16:52.364790 2364 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:16:52.364864 kubelet[2364]: I0913 00:16:52.364843 2364 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:16:52.365188 kubelet[2364]: I0913 00:16:52.365158 2364 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:16:52.397063 kubelet[2364]: E0913 00:16:52.396992 2364 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:52.400886 kubelet[2364]: I0913 00:16:52.400849 2364 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:16:52.407871 kubelet[2364]: I0913 00:16:52.407835 2364 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 00:16:52.417466 kubelet[2364]: I0913 00:16:52.417058 2364 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:16:52.419410 kubelet[2364]: I0913 00:16:52.419039 2364 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:16:52.419654 kubelet[2364]: I0913 00:16:52.419386 2364 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:16:52.419889 kubelet[2364]: I0913 00:16:52.419684 2364 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 13 00:16:52.419889 kubelet[2364]: I0913 00:16:52.419703 2364 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:16:52.419965 kubelet[2364]: I0913 00:16:52.419932 2364 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:52.422806 kubelet[2364]: I0913 00:16:52.422766 2364 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:16:52.422806 kubelet[2364]: I0913 00:16:52.422802 2364 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:16:52.422898 kubelet[2364]: I0913 00:16:52.422845 2364 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:16:52.423285 kubelet[2364]: I0913 00:16:52.422998 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:16:52.426165 kubelet[2364]: W0913 00:16:52.426012 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 00:16:52.426165 kubelet[2364]: E0913 00:16:52.426087 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:52.426377 kubelet[2364]: W0913 00:16:52.426071 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 00:16:52.426738 kubelet[2364]: E0913 00:16:52.426703 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:52.426786 kubelet[2364]: I0913 00:16:52.426771 2364 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 13 00:16:52.427241 kubelet[2364]: I0913 00:16:52.427217 2364 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:16:52.427789 kubelet[2364]: W0913 00:16:52.427757 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:16:52.430482 kubelet[2364]: I0913 00:16:52.430451 2364 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:16:52.430537 kubelet[2364]: I0913 00:16:52.430497 2364 server.go:1287] "Started kubelet" Sep 13 00:16:52.430749 kubelet[2364]: I0913 00:16:52.430720 2364 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:16:52.431839 kubelet[2364]: I0913 00:16:52.431804 2364 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:16:52.432936 kubelet[2364]: I0913 00:16:52.432671 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:16:52.433038 kubelet[2364]: I0913 00:16:52.433006 2364 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:16:52.434919 kubelet[2364]: I0913 00:16:52.434883 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:16:52.435205 kubelet[2364]: I0913 00:16:52.435184 2364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:16:52.438163 kubelet[2364]: E0913 00:16:52.438134 2364 kubelet.go:1555] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:16:52.438440 kubelet[2364]: E0913 00:16:52.438420 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:52.438479 kubelet[2364]: I0913 00:16:52.438451 2364 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:16:52.438634 kubelet[2364]: I0913 00:16:52.438605 2364 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:16:52.438667 kubelet[2364]: I0913 00:16:52.438653 2364 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:16:52.438998 kubelet[2364]: W0913 00:16:52.438960 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 00:16:52.439047 kubelet[2364]: E0913 00:16:52.439002 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:52.440047 kubelet[2364]: E0913 00:16:52.438345 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af673796fed1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:16:52.430470865 +0000 UTC 
m=+0.497031482,LastTimestamp:2025-09-13 00:16:52.430470865 +0000 UTC m=+0.497031482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:16:52.440194 kubelet[2364]: E0913 00:16:52.440050 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" Sep 13 00:16:52.440342 kubelet[2364]: I0913 00:16:52.440310 2364 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:16:52.440448 kubelet[2364]: I0913 00:16:52.440417 2364 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:16:52.441425 kubelet[2364]: I0913 00:16:52.441406 2364 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:16:52.457953 kubelet[2364]: I0913 00:16:52.457887 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:16:52.459771 kubelet[2364]: I0913 00:16:52.459672 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:16:52.459771 kubelet[2364]: I0913 00:16:52.459714 2364 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:16:52.459771 kubelet[2364]: I0913 00:16:52.459743 2364 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:16:52.459771 kubelet[2364]: I0913 00:16:52.459750 2364 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:16:52.459919 kubelet[2364]: E0913 00:16:52.459806 2364 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:16:52.460604 kubelet[2364]: W0913 00:16:52.460552 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 00:16:52.460636 kubelet[2364]: E0913 00:16:52.460604 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:52.464005 kubelet[2364]: I0913 00:16:52.463864 2364 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:16:52.464005 kubelet[2364]: I0913 00:16:52.463998 2364 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:16:52.464090 kubelet[2364]: I0913 00:16:52.464024 2364 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:52.539361 kubelet[2364]: E0913 00:16:52.539302 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:52.560640 kubelet[2364]: E0913 00:16:52.560567 2364 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:16:52.640614 kubelet[2364]: E0913 00:16:52.640194 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:52.640818 kubelet[2364]: E0913 00:16:52.640772 2364 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="400ms" Sep 13 00:16:52.741392 kubelet[2364]: E0913 00:16:52.741334 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:52.761954 kubelet[2364]: E0913 00:16:52.761840 2364 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:16:52.842516 kubelet[2364]: E0913 00:16:52.842443 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:52.943465 kubelet[2364]: E0913 00:16:52.943341 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:53.042384 kubelet[2364]: E0913 00:16:53.042307 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" Sep 13 00:16:53.043456 kubelet[2364]: E0913 00:16:53.043421 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:53.143942 kubelet[2364]: E0913 00:16:53.143897 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:53.162059 kubelet[2364]: E0913 00:16:53.162022 2364 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:16:53.244561 kubelet[2364]: E0913 00:16:53.244532 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:53.266314 
kubelet[2364]: W0913 00:16:53.266259 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 00:16:53.266363 kubelet[2364]: E0913 00:16:53.266321 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:53.345105 kubelet[2364]: E0913 00:16:53.345053 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:53.445495 kubelet[2364]: E0913 00:16:53.445459 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:53.546553 kubelet[2364]: E0913 00:16:53.546402 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:53.598150 kubelet[2364]: I0913 00:16:53.598113 2364 policy_none.go:49] "None policy: Start" Sep 13 00:16:53.598150 kubelet[2364]: I0913 00:16:53.598142 2364 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:16:53.598255 kubelet[2364]: I0913 00:16:53.598165 2364 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:16:53.606227 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:16:53.620149 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:16:53.623977 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 13 00:16:53.643380 kubelet[2364]: I0913 00:16:53.643326 2364 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:16:53.643694 kubelet[2364]: I0913 00:16:53.643671 2364 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:16:53.643737 kubelet[2364]: I0913 00:16:53.643691 2364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:16:53.644409 kubelet[2364]: I0913 00:16:53.644045 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:16:53.644839 kubelet[2364]: E0913 00:16:53.644802 2364 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:16:53.644883 kubelet[2364]: E0913 00:16:53.644864 2364 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:16:53.735312 kubelet[2364]: W0913 00:16:53.735232 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 00:16:53.735312 kubelet[2364]: E0913 00:16:53.735310 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:53.746117 kubelet[2364]: I0913 00:16:53.746068 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:53.746627 kubelet[2364]: E0913 00:16:53.746591 2364 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Sep 13 00:16:53.839468 kubelet[2364]: W0913 00:16:53.839340 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 00:16:53.839468 kubelet[2364]: E0913 00:16:53.839398 2364 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:53.843505 kubelet[2364]: E0913 00:16:53.843468 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="1.6s" Sep 13 00:16:53.948375 kubelet[2364]: I0913 00:16:53.948261 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:53.948798 kubelet[2364]: E0913 00:16:53.948729 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Sep 13 00:16:53.955459 kubelet[2364]: W0913 00:16:53.955405 2364 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused Sep 13 00:16:53.955510 kubelet[2364]: E0913 00:16:53.955466 2364 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:53.973497 systemd[1]: Created slice kubepods-burstable-pod7de1c1e7a9f64db81235d31c1bd9bca7.slice - libcontainer container kubepods-burstable-pod7de1c1e7a9f64db81235d31c1bd9bca7.slice. Sep 13 00:16:53.982053 kubelet[2364]: E0913 00:16:53.982002 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:53.984580 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 13 00:16:53.987091 kubelet[2364]: E0913 00:16:53.987061 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:53.990071 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
Sep 13 00:16:53.991907 kubelet[2364]: E0913 00:16:53.991865 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:54.047385 kubelet[2364]: I0913 00:16:54.047329 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:54.047385 kubelet[2364]: I0913 00:16:54.047390 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:54.047890 kubelet[2364]: I0913 00:16:54.047456 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:54.047890 kubelet[2364]: I0913 00:16:54.047489 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7de1c1e7a9f64db81235d31c1bd9bca7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7de1c1e7a9f64db81235d31c1bd9bca7\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:54.047890 kubelet[2364]: I0913 00:16:54.047510 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/7de1c1e7a9f64db81235d31c1bd9bca7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7de1c1e7a9f64db81235d31c1bd9bca7\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:54.047890 kubelet[2364]: I0913 00:16:54.047562 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:54.047890 kubelet[2364]: I0913 00:16:54.047591 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:54.048044 kubelet[2364]: I0913 00:16:54.047616 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7de1c1e7a9f64db81235d31c1bd9bca7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7de1c1e7a9f64db81235d31c1bd9bca7\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:54.048044 kubelet[2364]: I0913 00:16:54.047659 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:54.283015 kubelet[2364]: E0913 00:16:54.282952 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:54.283974 containerd[1557]: time="2025-09-13T00:16:54.283935011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7de1c1e7a9f64db81235d31c1bd9bca7,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:54.288189 kubelet[2364]: E0913 00:16:54.288149 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:54.288753 containerd[1557]: time="2025-09-13T00:16:54.288707406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:54.293288 kubelet[2364]: E0913 00:16:54.293051 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:54.293603 containerd[1557]: time="2025-09-13T00:16:54.293545707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:54.328067 containerd[1557]: time="2025-09-13T00:16:54.328019979Z" level=info msg="connecting to shim 20cb9676b118bf1195a1adc65c6e13ab0910fe3791435b19bb0bb0741760c5e7" address="unix:///run/containerd/s/35a2ca20a8197b8bc3290293b91cb4984b41facaacde465257900e92f7281958" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:16:54.333660 containerd[1557]: time="2025-09-13T00:16:54.333625804Z" level=info msg="connecting to shim 01d130cfcfd791b67ab139be78ecfca3200b48e265997ca5653a032dac954990" address="unix:///run/containerd/s/610c8bd1a9f46b7b6951c0db6a4bd76c84b9c8d20308c5f941ac5a983e0dc530" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:16:54.353874 kubelet[2364]: I0913 00:16:54.353815 2364 kubelet_node_status.go:75] "Attempting to 
register node" node="localhost" Sep 13 00:16:54.354348 kubelet[2364]: E0913 00:16:54.354277 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" Sep 13 00:16:54.355012 containerd[1557]: time="2025-09-13T00:16:54.354971292Z" level=info msg="connecting to shim b19351647985ab8b8d5aff5b848a9aa676dae2d4021a138ad747b7284eb1587b" address="unix:///run/containerd/s/2a498726624ee2157c877cc2d1cc14ef6de0f892eb76518e4790b587ec723709" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:16:54.432115 systemd[1]: Started cri-containerd-01d130cfcfd791b67ab139be78ecfca3200b48e265997ca5653a032dac954990.scope - libcontainer container 01d130cfcfd791b67ab139be78ecfca3200b48e265997ca5653a032dac954990. Sep 13 00:16:54.438524 systemd[1]: Started cri-containerd-20cb9676b118bf1195a1adc65c6e13ab0910fe3791435b19bb0bb0741760c5e7.scope - libcontainer container 20cb9676b118bf1195a1adc65c6e13ab0910fe3791435b19bb0bb0741760c5e7. Sep 13 00:16:54.454068 systemd[1]: Started cri-containerd-b19351647985ab8b8d5aff5b848a9aa676dae2d4021a138ad747b7284eb1587b.scope - libcontainer container b19351647985ab8b8d5aff5b848a9aa676dae2d4021a138ad747b7284eb1587b. 
Sep 13 00:16:54.508632 containerd[1557]: time="2025-09-13T00:16:54.508536793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7de1c1e7a9f64db81235d31c1bd9bca7,Namespace:kube-system,Attempt:0,} returns sandbox id \"20cb9676b118bf1195a1adc65c6e13ab0910fe3791435b19bb0bb0741760c5e7\"" Sep 13 00:16:54.509877 kubelet[2364]: E0913 00:16:54.509851 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:54.513891 containerd[1557]: time="2025-09-13T00:16:54.513329441Z" level=info msg="CreateContainer within sandbox \"20cb9676b118bf1195a1adc65c6e13ab0910fe3791435b19bb0bb0741760c5e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:16:54.514468 containerd[1557]: time="2025-09-13T00:16:54.514407271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"01d130cfcfd791b67ab139be78ecfca3200b48e265997ca5653a032dac954990\"" Sep 13 00:16:54.515063 kubelet[2364]: E0913 00:16:54.515043 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:54.516608 containerd[1557]: time="2025-09-13T00:16:54.516571259Z" level=info msg="CreateContainer within sandbox \"01d130cfcfd791b67ab139be78ecfca3200b48e265997ca5653a032dac954990\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:16:54.518069 containerd[1557]: time="2025-09-13T00:16:54.518040367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"b19351647985ab8b8d5aff5b848a9aa676dae2d4021a138ad747b7284eb1587b\"" Sep 13 
00:16:54.518586 kubelet[2364]: E0913 00:16:54.518565 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:54.519340 kubelet[2364]: E0913 00:16:54.519067 2364 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:16:54.520217 containerd[1557]: time="2025-09-13T00:16:54.520179614Z" level=info msg="CreateContainer within sandbox \"b19351647985ab8b8d5aff5b848a9aa676dae2d4021a138ad747b7284eb1587b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:16:54.527824 containerd[1557]: time="2025-09-13T00:16:54.527791457Z" level=info msg="Container 846217323662e35921b33215569373fa2326ed8926c254166c43f36dcedd0f02: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:16:54.531456 containerd[1557]: time="2025-09-13T00:16:54.531411556Z" level=info msg="Container 25ac589ea698ac2d8111a3f1082cebc5a23dc38121a602b7f42d09f68eb3ee2d: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:16:54.534958 containerd[1557]: time="2025-09-13T00:16:54.534858332Z" level=info msg="Container d1c9263a700a518ab13de701dac1456daf87581f998efd9b9e86e02c90b867fb: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:16:54.543012 containerd[1557]: time="2025-09-13T00:16:54.542980691Z" level=info msg="CreateContainer within sandbox \"20cb9676b118bf1195a1adc65c6e13ab0910fe3791435b19bb0bb0741760c5e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"25ac589ea698ac2d8111a3f1082cebc5a23dc38121a602b7f42d09f68eb3ee2d\"" Sep 13 00:16:54.543583 containerd[1557]: time="2025-09-13T00:16:54.543552726Z" 
level=info msg="StartContainer for \"25ac589ea698ac2d8111a3f1082cebc5a23dc38121a602b7f42d09f68eb3ee2d\"" Sep 13 00:16:54.544815 containerd[1557]: time="2025-09-13T00:16:54.544647683Z" level=info msg="CreateContainer within sandbox \"01d130cfcfd791b67ab139be78ecfca3200b48e265997ca5653a032dac954990\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"846217323662e35921b33215569373fa2326ed8926c254166c43f36dcedd0f02\"" Sep 13 00:16:54.546564 containerd[1557]: time="2025-09-13T00:16:54.545494471Z" level=info msg="StartContainer for \"846217323662e35921b33215569373fa2326ed8926c254166c43f36dcedd0f02\"" Sep 13 00:16:54.546564 containerd[1557]: time="2025-09-13T00:16:54.545884775Z" level=info msg="connecting to shim 25ac589ea698ac2d8111a3f1082cebc5a23dc38121a602b7f42d09f68eb3ee2d" address="unix:///run/containerd/s/35a2ca20a8197b8bc3290293b91cb4984b41facaacde465257900e92f7281958" protocol=ttrpc version=3 Sep 13 00:16:54.547659 containerd[1557]: time="2025-09-13T00:16:54.547589194Z" level=info msg="connecting to shim 846217323662e35921b33215569373fa2326ed8926c254166c43f36dcedd0f02" address="unix:///run/containerd/s/610c8bd1a9f46b7b6951c0db6a4bd76c84b9c8d20308c5f941ac5a983e0dc530" protocol=ttrpc version=3 Sep 13 00:16:54.548532 containerd[1557]: time="2025-09-13T00:16:54.548492922Z" level=info msg="CreateContainer within sandbox \"b19351647985ab8b8d5aff5b848a9aa676dae2d4021a138ad747b7284eb1587b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d1c9263a700a518ab13de701dac1456daf87581f998efd9b9e86e02c90b867fb\"" Sep 13 00:16:54.549019 containerd[1557]: time="2025-09-13T00:16:54.548965939Z" level=info msg="StartContainer for \"d1c9263a700a518ab13de701dac1456daf87581f998efd9b9e86e02c90b867fb\"" Sep 13 00:16:54.550264 containerd[1557]: time="2025-09-13T00:16:54.550196649Z" level=info msg="connecting to shim d1c9263a700a518ab13de701dac1456daf87581f998efd9b9e86e02c90b867fb" 
address="unix:///run/containerd/s/2a498726624ee2157c877cc2d1cc14ef6de0f892eb76518e4790b587ec723709" protocol=ttrpc version=3 Sep 13 00:16:54.612099 systemd[1]: Started cri-containerd-d1c9263a700a518ab13de701dac1456daf87581f998efd9b9e86e02c90b867fb.scope - libcontainer container d1c9263a700a518ab13de701dac1456daf87581f998efd9b9e86e02c90b867fb. Sep 13 00:16:54.626342 systemd[1]: Started cri-containerd-25ac589ea698ac2d8111a3f1082cebc5a23dc38121a602b7f42d09f68eb3ee2d.scope - libcontainer container 25ac589ea698ac2d8111a3f1082cebc5a23dc38121a602b7f42d09f68eb3ee2d. Sep 13 00:16:54.628971 systemd[1]: Started cri-containerd-846217323662e35921b33215569373fa2326ed8926c254166c43f36dcedd0f02.scope - libcontainer container 846217323662e35921b33215569373fa2326ed8926c254166c43f36dcedd0f02. Sep 13 00:16:54.695658 containerd[1557]: time="2025-09-13T00:16:54.695490448Z" level=info msg="StartContainer for \"d1c9263a700a518ab13de701dac1456daf87581f998efd9b9e86e02c90b867fb\" returns successfully" Sep 13 00:16:54.720396 containerd[1557]: time="2025-09-13T00:16:54.720323239Z" level=info msg="StartContainer for \"846217323662e35921b33215569373fa2326ed8926c254166c43f36dcedd0f02\" returns successfully" Sep 13 00:16:54.734523 containerd[1557]: time="2025-09-13T00:16:54.734442629Z" level=info msg="StartContainer for \"25ac589ea698ac2d8111a3f1082cebc5a23dc38121a602b7f42d09f68eb3ee2d\" returns successfully" Sep 13 00:16:55.161133 kubelet[2364]: I0913 00:16:55.161088 2364 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:55.472225 kubelet[2364]: E0913 00:16:55.472175 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:55.472659 kubelet[2364]: E0913 00:16:55.472308 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 
00:16:55.475536 kubelet[2364]: E0913 00:16:55.475430 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:55.475901 kubelet[2364]: E0913 00:16:55.475849 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:55.478212 kubelet[2364]: E0913 00:16:55.478063 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:55.478212 kubelet[2364]: E0913 00:16:55.478150 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:55.935179 update_engine[1548]: I20250913 00:16:55.934970 1548 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:16:56.475777 kubelet[2364]: E0913 00:16:56.475727 2364 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:16:56.481406 kubelet[2364]: E0913 00:16:56.481367 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:56.481526 kubelet[2364]: E0913 00:16:56.481500 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:56.481726 kubelet[2364]: E0913 00:16:56.481699 2364 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:56.481790 kubelet[2364]: E0913 00:16:56.481781 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:56.558567 kubelet[2364]: I0913 00:16:56.558226 2364 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:16:56.639778 kubelet[2364]: I0913 00:16:56.639730 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:56.857107 kubelet[2364]: E0913 00:16:56.856904 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:56.857107 kubelet[2364]: I0913 00:16:56.856976 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:56.859219 kubelet[2364]: E0913 00:16:56.859181 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:56.859219 kubelet[2364]: I0913 00:16:56.859226 2364 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:56.860855 kubelet[2364]: E0913 00:16:56.860811 2364 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:57.430010 kubelet[2364]: I0913 00:16:57.429960 2364 apiserver.go:52] "Watching apiserver" Sep 13 00:16:57.439189 kubelet[2364]: I0913 00:16:57.439138 2364 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:16:58.355051 systemd[1]: Reload requested from client PID 2659 ('systemctl') (unit session-9.scope)... Sep 13 00:16:58.355069 systemd[1]: Reloading... Sep 13 00:16:58.451956 zram_generator::config[2702]: No configuration found. Sep 13 00:16:58.589327 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:16:58.721117 systemd[1]: Reloading finished in 365 ms. Sep 13 00:16:58.756365 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:58.781810 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:16:58.782208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:58.782275 systemd[1]: kubelet.service: Consumed 1.076s CPU time, 134.1M memory peak. Sep 13 00:16:58.784691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:59.009130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:16:59.019298 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:16:59.062363 kubelet[2747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:16:59.062363 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:16:59.062363 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:16:59.062804 kubelet[2747]: I0913 00:16:59.062417 2747 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:16:59.073575 kubelet[2747]: I0913 00:16:59.073524 2747 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:16:59.073575 kubelet[2747]: I0913 00:16:59.073555 2747 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:16:59.073842 kubelet[2747]: I0913 00:16:59.073819 2747 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:16:59.075069 kubelet[2747]: I0913 00:16:59.075041 2747 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 00:16:59.077429 kubelet[2747]: I0913 00:16:59.077400 2747 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:16:59.081376 kubelet[2747]: I0913 00:16:59.081340 2747 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 00:16:59.087044 kubelet[2747]: I0913 00:16:59.086993 2747 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:16:59.087314 kubelet[2747]: I0913 00:16:59.087266 2747 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:16:59.087514 kubelet[2747]: I0913 00:16:59.087302 2747 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:16:59.087592 kubelet[2747]: I0913 00:16:59.087520 2747 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:16:59.087592 kubelet[2747]: I0913 00:16:59.087531 2747 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:16:59.087592 kubelet[2747]: I0913 00:16:59.087591 2747 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:59.087796 kubelet[2747]: I0913 00:16:59.087768 2747 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:16:59.087830 kubelet[2747]: I0913 00:16:59.087800 2747 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:16:59.087830 kubelet[2747]: I0913 00:16:59.087825 2747 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:16:59.087877 kubelet[2747]: I0913 00:16:59.087837 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:16:59.091260 kubelet[2747]: I0913 00:16:59.091230 2747 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 13 00:16:59.091717 kubelet[2747]: I0913 00:16:59.091685 2747 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:16:59.092327 kubelet[2747]: I0913 00:16:59.092298 2747 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:16:59.092371 kubelet[2747]: I0913 00:16:59.092337 2747 server.go:1287] "Started kubelet" Sep 13 00:16:59.095064 kubelet[2747]: I0913 00:16:59.095040 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:16:59.095591 kubelet[2747]: I0913 00:16:59.095522 2747 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:16:59.095780 kubelet[2747]: I0913 00:16:59.095722 2747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:16:59.096526 kubelet[2747]: I0913 00:16:59.096489 2747 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:16:59.098904 kubelet[2747]: I0913 00:16:59.097042 2747 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:16:59.100126 kubelet[2747]: I0913 00:16:59.100099 2747 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:16:59.100179 kubelet[2747]: I0913 00:16:59.100144 2747 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:16:59.100266 kubelet[2747]: I0913 00:16:59.100226 2747 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:16:59.100300 kubelet[2747]: E0913 00:16:59.100275 2747 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:59.100712 kubelet[2747]: I0913 00:16:59.100643 2747 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:16:59.100794 kubelet[2747]: I0913 00:16:59.100771 2747 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:16:59.100890 kubelet[2747]: I0913 00:16:59.100866 2747 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:16:59.105512 kubelet[2747]: I0913 00:16:59.105488 2747 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:16:59.108368 kubelet[2747]: E0913 00:16:59.108339 2747 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:16:59.115720 kubelet[2747]: I0913 00:16:59.115659 2747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:16:59.117434 kubelet[2747]: I0913 00:16:59.117383 2747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:16:59.117434 kubelet[2747]: I0913 00:16:59.117413 2747 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:16:59.117524 kubelet[2747]: I0913 00:16:59.117453 2747 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:16:59.117524 kubelet[2747]: I0913 00:16:59.117464 2747 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:16:59.117590 kubelet[2747]: E0913 00:16:59.117521 2747 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:16:59.153341 kubelet[2747]: I0913 00:16:59.153298 2747 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:16:59.153341 kubelet[2747]: I0913 00:16:59.153322 2747 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:16:59.153341 kubelet[2747]: I0913 00:16:59.153343 2747 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:59.153518 kubelet[2747]: I0913 00:16:59.153506 2747 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:16:59.153544 kubelet[2747]: I0913 00:16:59.153517 2747 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:16:59.153544 kubelet[2747]: I0913 00:16:59.153537 2747 policy_none.go:49] "None policy: Start" Sep 13 00:16:59.153583 kubelet[2747]: I0913 00:16:59.153547 2747 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:16:59.153583 kubelet[2747]: I0913 00:16:59.153558 2747 state_mem.go:35] "Initializing new in-memory state 
store" Sep 13 00:16:59.153676 kubelet[2747]: I0913 00:16:59.153660 2747 state_mem.go:75] "Updated machine memory state" Sep 13 00:16:59.158397 kubelet[2747]: I0913 00:16:59.158365 2747 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:16:59.158564 kubelet[2747]: I0913 00:16:59.158541 2747 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:16:59.158591 kubelet[2747]: I0913 00:16:59.158557 2747 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:16:59.159181 kubelet[2747]: I0913 00:16:59.159142 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:16:59.160231 kubelet[2747]: E0913 00:16:59.160013 2747 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:16:59.218823 kubelet[2747]: I0913 00:16:59.218794 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:59.219221 kubelet[2747]: I0913 00:16:59.218991 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:59.219409 kubelet[2747]: I0913 00:16:59.219189 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:59.265584 kubelet[2747]: I0913 00:16:59.265455 2747 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:59.273642 kubelet[2747]: I0913 00:16:59.273597 2747 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 00:16:59.273841 kubelet[2747]: I0913 00:16:59.273672 2747 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:16:59.401582 kubelet[2747]: I0913 00:16:59.401525 2747 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7de1c1e7a9f64db81235d31c1bd9bca7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7de1c1e7a9f64db81235d31c1bd9bca7\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:59.401582 kubelet[2747]: I0913 00:16:59.401571 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7de1c1e7a9f64db81235d31c1bd9bca7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7de1c1e7a9f64db81235d31c1bd9bca7\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:59.401582 kubelet[2747]: I0913 00:16:59.401592 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7de1c1e7a9f64db81235d31c1bd9bca7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7de1c1e7a9f64db81235d31c1bd9bca7\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:59.401798 kubelet[2747]: I0913 00:16:59.401616 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:59.401798 kubelet[2747]: I0913 00:16:59.401632 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:59.401798 kubelet[2747]: I0913 00:16:59.401650 2747 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:59.401798 kubelet[2747]: I0913 00:16:59.401666 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:59.401798 kubelet[2747]: I0913 00:16:59.401703 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:59.401939 kubelet[2747]: I0913 00:16:59.401725 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:59.525280 kubelet[2747]: E0913 00:16:59.525150 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:59.527576 kubelet[2747]: E0913 00:16:59.527427 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 
00:17:00.092176 kubelet[2747]: I0913 00:17:00.092093 2747 apiserver.go:52] "Watching apiserver" Sep 13 00:17:00.101455 kubelet[2747]: I0913 00:17:00.101406 2747 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:17:00.136520 kubelet[2747]: E0913 00:17:00.136474 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:00.137179 kubelet[2747]: I0913 00:17:00.137131 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:00.137906 kubelet[2747]: E0913 00:17:00.137496 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:00.143811 kubelet[2747]: E0913 00:17:00.143784 2747 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:00.143947 kubelet[2747]: E0913 00:17:00.143926 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:00.177944 kubelet[2747]: I0913 00:17:00.177518 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.177491597 podStartE2EDuration="1.177491597s" podCreationTimestamp="2025-09-13 00:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-13 00:17:00.177279005 +0000 UTC m=+1.153145524" watchObservedRunningTime="2025-09-13 00:17:00.177491597 +0000 UTC m=+1.153358116" Sep 13 00:17:00.177944 kubelet[2747]: I0913 00:17:00.177655 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.17765024 podStartE2EDuration="1.17765024s" podCreationTimestamp="2025-09-13 00:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:00.162153575 +0000 UTC m=+1.138020094" watchObservedRunningTime="2025-09-13 00:17:00.17765024 +0000 UTC m=+1.153516759" Sep 13 00:17:00.185994 kubelet[2747]: I0913 00:17:00.185931 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.185893593 podStartE2EDuration="1.185893593s" podCreationTimestamp="2025-09-13 00:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:00.185629708 +0000 UTC m=+1.161496217" watchObservedRunningTime="2025-09-13 00:17:00.185893593 +0000 UTC m=+1.161760113" Sep 13 00:17:01.137701 kubelet[2747]: E0913 00:17:01.137658 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:01.138207 kubelet[2747]: E0913 00:17:01.137869 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:02.139525 kubelet[2747]: E0913 00:17:02.139469 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 
13 00:17:03.140706 kubelet[2747]: E0913 00:17:03.140660 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:04.694543 kubelet[2747]: E0913 00:17:04.692060 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:04.718114 kubelet[2747]: I0913 00:17:04.718056 2747 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:17:04.720934 containerd[1557]: time="2025-09-13T00:17:04.718470893Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:17:04.721335 kubelet[2747]: I0913 00:17:04.718744 2747 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:17:05.143282 kubelet[2747]: E0913 00:17:05.143241 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:05.441474 kubelet[2747]: I0913 00:17:05.441206 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3eaf183-5915-47d6-bc4c-9ba4b51c1f9d-kube-proxy\") pod \"kube-proxy-kzcqz\" (UID: \"c3eaf183-5915-47d6-bc4c-9ba4b51c1f9d\") " pod="kube-system/kube-proxy-kzcqz" Sep 13 00:17:05.441474 kubelet[2747]: I0913 00:17:05.441259 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3eaf183-5915-47d6-bc4c-9ba4b51c1f9d-xtables-lock\") pod \"kube-proxy-kzcqz\" (UID: \"c3eaf183-5915-47d6-bc4c-9ba4b51c1f9d\") " pod="kube-system/kube-proxy-kzcqz" Sep 13 00:17:05.441474 
kubelet[2747]: I0913 00:17:05.441293 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6642g\" (UniqueName: \"kubernetes.io/projected/c3eaf183-5915-47d6-bc4c-9ba4b51c1f9d-kube-api-access-6642g\") pod \"kube-proxy-kzcqz\" (UID: \"c3eaf183-5915-47d6-bc4c-9ba4b51c1f9d\") " pod="kube-system/kube-proxy-kzcqz" Sep 13 00:17:05.441474 kubelet[2747]: I0913 00:17:05.441324 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3eaf183-5915-47d6-bc4c-9ba4b51c1f9d-lib-modules\") pod \"kube-proxy-kzcqz\" (UID: \"c3eaf183-5915-47d6-bc4c-9ba4b51c1f9d\") " pod="kube-system/kube-proxy-kzcqz" Sep 13 00:17:05.442310 systemd[1]: Created slice kubepods-besteffort-podc3eaf183_5915_47d6_bc4c_9ba4b51c1f9d.slice - libcontainer container kubepods-besteffort-podc3eaf183_5915_47d6_bc4c_9ba4b51c1f9d.slice. Sep 13 00:17:05.759473 kubelet[2747]: E0913 00:17:05.759433 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:05.760044 containerd[1557]: time="2025-09-13T00:17:05.759982935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kzcqz,Uid:c3eaf183-5915-47d6-bc4c-9ba4b51c1f9d,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:05.787159 containerd[1557]: time="2025-09-13T00:17:05.787103670Z" level=info msg="connecting to shim e4f90aa380e80ddcb906d619e62d5252ba09625f25f5b634e3ced74cdfb8f3f9" address="unix:///run/containerd/s/c05fee7992cddcfd6296cf896a7bd57d0ed084bda399927b12446164c43feb63" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:05.812536 systemd[1]: Created slice kubepods-besteffort-podbc7e942a_82c7_4831_99bb_f587bd68a24f.slice - libcontainer container kubepods-besteffort-podbc7e942a_82c7_4831_99bb_f587bd68a24f.slice. 
Sep 13 00:17:05.833048 systemd[1]: Started cri-containerd-e4f90aa380e80ddcb906d619e62d5252ba09625f25f5b634e3ced74cdfb8f3f9.scope - libcontainer container e4f90aa380e80ddcb906d619e62d5252ba09625f25f5b634e3ced74cdfb8f3f9. Sep 13 00:17:05.843116 kubelet[2747]: I0913 00:17:05.843019 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc7e942a-82c7-4831-99bb-f587bd68a24f-var-lib-calico\") pod \"tigera-operator-755d956888-gz9b5\" (UID: \"bc7e942a-82c7-4831-99bb-f587bd68a24f\") " pod="tigera-operator/tigera-operator-755d956888-gz9b5" Sep 13 00:17:05.843116 kubelet[2747]: I0913 00:17:05.843061 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp88b\" (UniqueName: \"kubernetes.io/projected/bc7e942a-82c7-4831-99bb-f587bd68a24f-kube-api-access-pp88b\") pod \"tigera-operator-755d956888-gz9b5\" (UID: \"bc7e942a-82c7-4831-99bb-f587bd68a24f\") " pod="tigera-operator/tigera-operator-755d956888-gz9b5" Sep 13 00:17:05.863098 containerd[1557]: time="2025-09-13T00:17:05.863053910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kzcqz,Uid:c3eaf183-5915-47d6-bc4c-9ba4b51c1f9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4f90aa380e80ddcb906d619e62d5252ba09625f25f5b634e3ced74cdfb8f3f9\"" Sep 13 00:17:05.864658 kubelet[2747]: E0913 00:17:05.864518 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:05.867426 containerd[1557]: time="2025-09-13T00:17:05.867390140Z" level=info msg="CreateContainer within sandbox \"e4f90aa380e80ddcb906d619e62d5252ba09625f25f5b634e3ced74cdfb8f3f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:17:05.880931 containerd[1557]: time="2025-09-13T00:17:05.878810914Z" level=info msg="Container 
9dbf42ba1ced8aa7b3d3efb1155af53d14d65cf2dee74d8f91d2766d9038a260: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:17:05.883053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225021003.mount: Deactivated successfully. Sep 13 00:17:05.887984 containerd[1557]: time="2025-09-13T00:17:05.887945381Z" level=info msg="CreateContainer within sandbox \"e4f90aa380e80ddcb906d619e62d5252ba09625f25f5b634e3ced74cdfb8f3f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9dbf42ba1ced8aa7b3d3efb1155af53d14d65cf2dee74d8f91d2766d9038a260\"" Sep 13 00:17:05.888538 containerd[1557]: time="2025-09-13T00:17:05.888496354Z" level=info msg="StartContainer for \"9dbf42ba1ced8aa7b3d3efb1155af53d14d65cf2dee74d8f91d2766d9038a260\"" Sep 13 00:17:05.890452 containerd[1557]: time="2025-09-13T00:17:05.890407701Z" level=info msg="connecting to shim 9dbf42ba1ced8aa7b3d3efb1155af53d14d65cf2dee74d8f91d2766d9038a260" address="unix:///run/containerd/s/c05fee7992cddcfd6296cf896a7bd57d0ed084bda399927b12446164c43feb63" protocol=ttrpc version=3 Sep 13 00:17:05.921204 systemd[1]: Started cri-containerd-9dbf42ba1ced8aa7b3d3efb1155af53d14d65cf2dee74d8f91d2766d9038a260.scope - libcontainer container 9dbf42ba1ced8aa7b3d3efb1155af53d14d65cf2dee74d8f91d2766d9038a260. 
Sep 13 00:17:05.974479 containerd[1557]: time="2025-09-13T00:17:05.974419619Z" level=info msg="StartContainer for \"9dbf42ba1ced8aa7b3d3efb1155af53d14d65cf2dee74d8f91d2766d9038a260\" returns successfully" Sep 13 00:17:06.006408 kubelet[2747]: E0913 00:17:06.006360 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:06.116171 containerd[1557]: time="2025-09-13T00:17:06.116015459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-gz9b5,Uid:bc7e942a-82c7-4831-99bb-f587bd68a24f,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:17:06.135587 containerd[1557]: time="2025-09-13T00:17:06.135530977Z" level=info msg="connecting to shim cec9ef08cc719bbec5acad5a3f4f41f5fff443abdcc1ea6203738370a17f25aa" address="unix:///run/containerd/s/b70d48a83bc5e8952e5f5586d5fe6f2b58037a4abb5364aebcade16ed9db5787" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:06.162229 systemd[1]: Started cri-containerd-cec9ef08cc719bbec5acad5a3f4f41f5fff443abdcc1ea6203738370a17f25aa.scope - libcontainer container cec9ef08cc719bbec5acad5a3f4f41f5fff443abdcc1ea6203738370a17f25aa. 
Sep 13 00:17:06.165004 kubelet[2747]: E0913 00:17:06.164972 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:06.165548 kubelet[2747]: E0913 00:17:06.165505 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:06.167424 kubelet[2747]: E0913 00:17:06.167392 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:06.195152 kubelet[2747]: I0913 00:17:06.195039 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kzcqz" podStartSLOduration=1.195009351 podStartE2EDuration="1.195009351s" podCreationTimestamp="2025-09-13 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:06.194525505 +0000 UTC m=+7.170392034" watchObservedRunningTime="2025-09-13 00:17:06.195009351 +0000 UTC m=+7.170875870" Sep 13 00:17:06.220549 containerd[1557]: time="2025-09-13T00:17:06.220508226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-gz9b5,Uid:bc7e942a-82c7-4831-99bb-f587bd68a24f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cec9ef08cc719bbec5acad5a3f4f41f5fff443abdcc1ea6203738370a17f25aa\"" Sep 13 00:17:06.222434 containerd[1557]: time="2025-09-13T00:17:06.222379583Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:17:09.311162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396638938.mount: Deactivated successfully. 
Sep 13 00:17:11.773542 containerd[1557]: time="2025-09-13T00:17:11.773470436Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:11.774184 containerd[1557]: time="2025-09-13T00:17:11.774142163Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:17:11.775299 containerd[1557]: time="2025-09-13T00:17:11.775230313Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:11.777614 containerd[1557]: time="2025-09-13T00:17:11.777570575Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:11.778207 containerd[1557]: time="2025-09-13T00:17:11.778162674Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 5.555724474s" Sep 13 00:17:11.778207 containerd[1557]: time="2025-09-13T00:17:11.778197434Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:17:11.780346 containerd[1557]: time="2025-09-13T00:17:11.780312370Z" level=info msg="CreateContainer within sandbox \"cec9ef08cc719bbec5acad5a3f4f41f5fff443abdcc1ea6203738370a17f25aa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:17:11.789587 containerd[1557]: time="2025-09-13T00:17:11.789529657Z" level=info msg="Container 
7a4ab8e745c29f40b0208a3bff1208ae26ec8779295bbea31a81f1cfb78fd3e5: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:17:11.795956 containerd[1557]: time="2025-09-13T00:17:11.795923156Z" level=info msg="CreateContainer within sandbox \"cec9ef08cc719bbec5acad5a3f4f41f5fff443abdcc1ea6203738370a17f25aa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7a4ab8e745c29f40b0208a3bff1208ae26ec8779295bbea31a81f1cfb78fd3e5\"" Sep 13 00:17:11.796498 containerd[1557]: time="2025-09-13T00:17:11.796385829Z" level=info msg="StartContainer for \"7a4ab8e745c29f40b0208a3bff1208ae26ec8779295bbea31a81f1cfb78fd3e5\"" Sep 13 00:17:11.797350 containerd[1557]: time="2025-09-13T00:17:11.797288422Z" level=info msg="connecting to shim 7a4ab8e745c29f40b0208a3bff1208ae26ec8779295bbea31a81f1cfb78fd3e5" address="unix:///run/containerd/s/b70d48a83bc5e8952e5f5586d5fe6f2b58037a4abb5364aebcade16ed9db5787" protocol=ttrpc version=3 Sep 13 00:17:11.860057 systemd[1]: Started cri-containerd-7a4ab8e745c29f40b0208a3bff1208ae26ec8779295bbea31a81f1cfb78fd3e5.scope - libcontainer container 7a4ab8e745c29f40b0208a3bff1208ae26ec8779295bbea31a81f1cfb78fd3e5. 
Sep 13 00:17:11.893020 containerd[1557]: time="2025-09-13T00:17:11.892974396Z" level=info msg="StartContainer for \"7a4ab8e745c29f40b0208a3bff1208ae26ec8779295bbea31a81f1cfb78fd3e5\" returns successfully" Sep 13 00:17:12.186974 kubelet[2747]: I0913 00:17:12.186692 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-gz9b5" podStartSLOduration=1.629314104 podStartE2EDuration="7.186670949s" podCreationTimestamp="2025-09-13 00:17:05 +0000 UTC" firstStartedPulling="2025-09-13 00:17:06.221702541 +0000 UTC m=+7.197569060" lastFinishedPulling="2025-09-13 00:17:11.779059386 +0000 UTC m=+12.754925905" observedRunningTime="2025-09-13 00:17:12.185947703 +0000 UTC m=+13.161814223" watchObservedRunningTime="2025-09-13 00:17:12.186670949 +0000 UTC m=+13.162537468" Sep 13 00:17:12.803097 kubelet[2747]: E0913 00:17:12.803051 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:13.180415 kubelet[2747]: E0913 00:17:13.180223 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:17.352836 sudo[1781]: pam_unix(sudo:session): session closed for user root Sep 13 00:17:17.354698 sshd[1780]: Connection closed by 10.0.0.1 port 52096 Sep 13 00:17:17.356939 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:17.364546 systemd[1]: sshd@8-10.0.0.20:22-10.0.0.1:52096.service: Deactivated successfully. Sep 13 00:17:17.367133 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:17:17.368168 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:17:17.368650 systemd[1]: session-9.scope: Consumed 5.635s CPU time, 226.3M memory peak. 
Sep 13 00:17:17.374420 systemd-logind[1543]: Removed session 9. Sep 13 00:17:20.191265 systemd[1]: Created slice kubepods-besteffort-pod7e737b48_d7a5_472d_ac00_f70c07f11ec6.slice - libcontainer container kubepods-besteffort-pod7e737b48_d7a5_472d_ac00_f70c07f11ec6.slice. Sep 13 00:17:20.239787 kubelet[2747]: I0913 00:17:20.239678 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e737b48-d7a5-472d-ac00-f70c07f11ec6-tigera-ca-bundle\") pod \"calico-typha-7ff9b66cf7-6mqwx\" (UID: \"7e737b48-d7a5-472d-ac00-f70c07f11ec6\") " pod="calico-system/calico-typha-7ff9b66cf7-6mqwx" Sep 13 00:17:20.240332 kubelet[2747]: I0913 00:17:20.239837 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7e737b48-d7a5-472d-ac00-f70c07f11ec6-typha-certs\") pod \"calico-typha-7ff9b66cf7-6mqwx\" (UID: \"7e737b48-d7a5-472d-ac00-f70c07f11ec6\") " pod="calico-system/calico-typha-7ff9b66cf7-6mqwx" Sep 13 00:17:20.240332 kubelet[2747]: I0913 00:17:20.239983 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mbrn\" (UniqueName: \"kubernetes.io/projected/7e737b48-d7a5-472d-ac00-f70c07f11ec6-kube-api-access-8mbrn\") pod \"calico-typha-7ff9b66cf7-6mqwx\" (UID: \"7e737b48-d7a5-472d-ac00-f70c07f11ec6\") " pod="calico-system/calico-typha-7ff9b66cf7-6mqwx" Sep 13 00:17:20.434444 systemd[1]: Created slice kubepods-besteffort-pod484778c2_c505_4107_99b8_7884442213ec.slice - libcontainer container kubepods-besteffort-pod484778c2_c505_4107_99b8_7884442213ec.slice. 
Sep 13 00:17:20.443044 kubelet[2747]: I0913 00:17:20.442157 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/484778c2-c505-4107-99b8-7884442213ec-cni-net-dir\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.443044 kubelet[2747]: I0913 00:17:20.442218 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/484778c2-c505-4107-99b8-7884442213ec-cni-bin-dir\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.443044 kubelet[2747]: I0913 00:17:20.442388 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/484778c2-c505-4107-99b8-7884442213ec-flexvol-driver-host\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.443044 kubelet[2747]: I0913 00:17:20.442412 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/484778c2-c505-4107-99b8-7884442213ec-cni-log-dir\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.443044 kubelet[2747]: I0913 00:17:20.442436 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/484778c2-c505-4107-99b8-7884442213ec-lib-modules\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.500369 kubelet[2747]: E0913 00:17:20.500317 2747 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:20.501157 containerd[1557]: time="2025-09-13T00:17:20.501111661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ff9b66cf7-6mqwx,Uid:7e737b48-d7a5-472d-ac00-f70c07f11ec6,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:20.543953 kubelet[2747]: I0913 00:17:20.543128 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/484778c2-c505-4107-99b8-7884442213ec-tigera-ca-bundle\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.543953 kubelet[2747]: I0913 00:17:20.543195 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/484778c2-c505-4107-99b8-7884442213ec-var-run-calico\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.543953 kubelet[2747]: I0913 00:17:20.543235 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/484778c2-c505-4107-99b8-7884442213ec-var-lib-calico\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.543953 kubelet[2747]: I0913 00:17:20.543273 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/484778c2-c505-4107-99b8-7884442213ec-xtables-lock\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.543953 kubelet[2747]: I0913 
00:17:20.543294 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4hrq\" (UniqueName: \"kubernetes.io/projected/484778c2-c505-4107-99b8-7884442213ec-kube-api-access-z4hrq\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.544292 kubelet[2747]: I0913 00:17:20.543313 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/484778c2-c505-4107-99b8-7884442213ec-policysync\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.544292 kubelet[2747]: I0913 00:17:20.543341 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/484778c2-c505-4107-99b8-7884442213ec-node-certs\") pod \"calico-node-vmn7t\" (UID: \"484778c2-c505-4107-99b8-7884442213ec\") " pod="calico-system/calico-node-vmn7t" Sep 13 00:17:20.548712 kubelet[2747]: E0913 00:17:20.548595 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.548712 kubelet[2747]: W0913 00:17:20.548631 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.548712 kubelet[2747]: E0913 00:17:20.548682 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.580939 containerd[1557]: time="2025-09-13T00:17:20.580847094Z" level=info msg="connecting to shim 5a9a64175622c7a0b8faec802c1903add7cbc422cebdbd265cadd0d4b099cf35" address="unix:///run/containerd/s/76dd0a4751b437517897be75c74a1614d32635ea7515801765c990fb8c443b3f" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:20.644298 kubelet[2747]: E0913 00:17:20.644237 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.644298 kubelet[2747]: W0913 00:17:20.644265 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.644298 kubelet[2747]: E0913 00:17:20.644292 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.644584 kubelet[2747]: E0913 00:17:20.644508 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.644584 kubelet[2747]: W0913 00:17:20.644517 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.644584 kubelet[2747]: E0913 00:17:20.644525 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.644748 kubelet[2747]: E0913 00:17:20.644731 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.644748 kubelet[2747]: W0913 00:17:20.644745 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.644748 kubelet[2747]: E0913 00:17:20.644754 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.645078 systemd[1]: Started cri-containerd-5a9a64175622c7a0b8faec802c1903add7cbc422cebdbd265cadd0d4b099cf35.scope - libcontainer container 5a9a64175622c7a0b8faec802c1903add7cbc422cebdbd265cadd0d4b099cf35. Sep 13 00:17:20.646454 kubelet[2747]: E0913 00:17:20.646402 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.646454 kubelet[2747]: W0913 00:17:20.646449 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.646560 kubelet[2747]: E0913 00:17:20.646463 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.646950 kubelet[2747]: E0913 00:17:20.646927 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.646950 kubelet[2747]: W0913 00:17:20.646945 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.647096 kubelet[2747]: E0913 00:17:20.647070 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.647425 kubelet[2747]: E0913 00:17:20.647404 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.647425 kubelet[2747]: W0913 00:17:20.647417 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.647560 kubelet[2747]: E0913 00:17:20.647538 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.647804 kubelet[2747]: E0913 00:17:20.647775 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.647804 kubelet[2747]: W0913 00:17:20.647795 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.647890 kubelet[2747]: E0913 00:17:20.647871 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.649212 kubelet[2747]: E0913 00:17:20.649187 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.649212 kubelet[2747]: W0913 00:17:20.649207 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.649403 kubelet[2747]: E0913 00:17:20.649368 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.649500 kubelet[2747]: E0913 00:17:20.649456 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.649500 kubelet[2747]: W0913 00:17:20.649463 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.649666 kubelet[2747]: E0913 00:17:20.649528 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.649758 kubelet[2747]: E0913 00:17:20.649722 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.649758 kubelet[2747]: W0913 00:17:20.649738 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.650002 kubelet[2747]: E0913 00:17:20.649823 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.650178 kubelet[2747]: E0913 00:17:20.650157 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.650178 kubelet[2747]: W0913 00:17:20.650171 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.650369 kubelet[2747]: E0913 00:17:20.650281 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.650537 kubelet[2747]: E0913 00:17:20.650517 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.650537 kubelet[2747]: W0913 00:17:20.650532 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.650685 kubelet[2747]: E0913 00:17:20.650659 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.650898 kubelet[2747]: E0913 00:17:20.650866 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.651058 kubelet[2747]: W0913 00:17:20.650905 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.651058 kubelet[2747]: E0913 00:17:20.651050 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.651547 kubelet[2747]: E0913 00:17:20.651498 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.651547 kubelet[2747]: W0913 00:17:20.651543 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.651651 kubelet[2747]: E0913 00:17:20.651619 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.651959 kubelet[2747]: E0913 00:17:20.651900 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.652312 kubelet[2747]: W0913 00:17:20.651992 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.652312 kubelet[2747]: E0913 00:17:20.652146 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.652527 kubelet[2747]: E0913 00:17:20.652506 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.652589 kubelet[2747]: W0913 00:17:20.652519 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.652618 kubelet[2747]: E0913 00:17:20.652598 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.652938 kubelet[2747]: E0913 00:17:20.652899 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.652995 kubelet[2747]: W0913 00:17:20.652962 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.653117 kubelet[2747]: E0913 00:17:20.653049 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.653337 kubelet[2747]: E0913 00:17:20.653297 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.653337 kubelet[2747]: W0913 00:17:20.653313 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.653459 kubelet[2747]: E0913 00:17:20.653441 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.653631 kubelet[2747]: E0913 00:17:20.653614 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.653631 kubelet[2747]: W0913 00:17:20.653627 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.653780 kubelet[2747]: E0913 00:17:20.653760 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.654089 kubelet[2747]: E0913 00:17:20.654069 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.654089 kubelet[2747]: W0913 00:17:20.654083 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.654151 kubelet[2747]: E0913 00:17:20.654133 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.654401 kubelet[2747]: E0913 00:17:20.654382 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.654401 kubelet[2747]: W0913 00:17:20.654395 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.654475 kubelet[2747]: E0913 00:17:20.654445 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.654852 kubelet[2747]: E0913 00:17:20.654788 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.654852 kubelet[2747]: W0913 00:17:20.654810 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.654985 kubelet[2747]: E0913 00:17:20.654965 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.655271 kubelet[2747]: E0913 00:17:20.655251 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.655316 kubelet[2747]: W0913 00:17:20.655265 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.655403 kubelet[2747]: E0913 00:17:20.655386 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.655650 kubelet[2747]: E0913 00:17:20.655625 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.655703 kubelet[2747]: W0913 00:17:20.655641 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.655788 kubelet[2747]: E0913 00:17:20.655765 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.656069 kubelet[2747]: E0913 00:17:20.656046 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.656069 kubelet[2747]: W0913 00:17:20.656062 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.656261 kubelet[2747]: E0913 00:17:20.656234 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.656542 kubelet[2747]: E0913 00:17:20.656521 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.656542 kubelet[2747]: W0913 00:17:20.656534 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.656634 kubelet[2747]: E0913 00:17:20.656578 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.657835 kubelet[2747]: E0913 00:17:20.657811 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.657835 kubelet[2747]: W0913 00:17:20.657826 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.658117 kubelet[2747]: E0913 00:17:20.658095 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.658857 kubelet[2747]: E0913 00:17:20.658815 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.658857 kubelet[2747]: W0913 00:17:20.658831 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.659018 kubelet[2747]: E0913 00:17:20.658987 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.659175 kubelet[2747]: E0913 00:17:20.659145 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.659175 kubelet[2747]: W0913 00:17:20.659158 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.659320 kubelet[2747]: E0913 00:17:20.659274 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.659777 kubelet[2747]: E0913 00:17:20.659754 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.659777 kubelet[2747]: W0913 00:17:20.659770 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.659974 kubelet[2747]: E0913 00:17:20.659886 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.660190 kubelet[2747]: E0913 00:17:20.660163 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.660190 kubelet[2747]: W0913 00:17:20.660182 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.660307 kubelet[2747]: E0913 00:17:20.660272 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.660596 kubelet[2747]: E0913 00:17:20.660572 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.660596 kubelet[2747]: W0913 00:17:20.660586 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.660760 kubelet[2747]: E0913 00:17:20.660702 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.661122 kubelet[2747]: E0913 00:17:20.661100 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.661122 kubelet[2747]: W0913 00:17:20.661114 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.661491 kubelet[2747]: E0913 00:17:20.661346 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.661818 kubelet[2747]: E0913 00:17:20.661737 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.661818 kubelet[2747]: W0913 00:17:20.661765 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.662239 kubelet[2747]: E0913 00:17:20.662147 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.662291 kubelet[2747]: E0913 00:17:20.662263 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.662291 kubelet[2747]: W0913 00:17:20.662275 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.662291 kubelet[2747]: E0913 00:17:20.662291 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.662631 kubelet[2747]: E0913 00:17:20.662610 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.662631 kubelet[2747]: W0913 00:17:20.662625 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.662746 kubelet[2747]: E0913 00:17:20.662637 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.676125 kubelet[2747]: E0913 00:17:20.676075 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.676125 kubelet[2747]: W0913 00:17:20.676110 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.676268 kubelet[2747]: E0913 00:17:20.676137 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.684456 kubelet[2747]: E0913 00:17:20.684396 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.684456 kubelet[2747]: W0913 00:17:20.684443 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.684456 kubelet[2747]: E0913 00:17:20.684464 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.685051 kubelet[2747]: E0913 00:17:20.685006 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxjn8" podUID="9e101dcf-ec95-4d86-b186-db61bd6330ed" Sep 13 00:17:20.742300 containerd[1557]: time="2025-09-13T00:17:20.742211312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vmn7t,Uid:484778c2-c505-4107-99b8-7884442213ec,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:20.745234 kubelet[2747]: E0913 00:17:20.745176 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.745234 kubelet[2747]: W0913 00:17:20.745215 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.745399 kubelet[2747]: E0913 00:17:20.745246 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.746325 kubelet[2747]: E0913 00:17:20.746288 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.746325 kubelet[2747]: W0913 00:17:20.746308 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.746325 kubelet[2747]: E0913 00:17:20.746322 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.747851 kubelet[2747]: E0913 00:17:20.747788 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.748042 kubelet[2747]: W0913 00:17:20.747851 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.748042 kubelet[2747]: E0913 00:17:20.747888 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.748321 kubelet[2747]: E0913 00:17:20.748278 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.748321 kubelet[2747]: W0913 00:17:20.748293 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.748321 kubelet[2747]: E0913 00:17:20.748305 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.748655 kubelet[2747]: E0913 00:17:20.748611 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.748655 kubelet[2747]: W0913 00:17:20.748627 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.748655 kubelet[2747]: E0913 00:17:20.748638 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.750159 kubelet[2747]: E0913 00:17:20.750124 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.750159 kubelet[2747]: W0913 00:17:20.750142 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.750159 kubelet[2747]: E0913 00:17:20.750154 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.750464 kubelet[2747]: E0913 00:17:20.750444 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.750464 kubelet[2747]: W0913 00:17:20.750459 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.750617 kubelet[2747]: E0913 00:17:20.750471 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.751026 kubelet[2747]: E0913 00:17:20.751003 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.751026 kubelet[2747]: W0913 00:17:20.751019 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.751096 kubelet[2747]: E0913 00:17:20.751031 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.751927 kubelet[2747]: E0913 00:17:20.751890 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.751983 kubelet[2747]: W0913 00:17:20.751944 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.751983 kubelet[2747]: E0913 00:17:20.751957 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.752222 kubelet[2747]: E0913 00:17:20.752146 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.752222 kubelet[2747]: W0913 00:17:20.752164 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.752222 kubelet[2747]: E0913 00:17:20.752175 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.752695 kubelet[2747]: E0913 00:17:20.752672 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.752695 kubelet[2747]: W0913 00:17:20.752689 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.752784 kubelet[2747]: E0913 00:17:20.752701 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.753612 kubelet[2747]: E0913 00:17:20.753579 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.753612 kubelet[2747]: W0913 00:17:20.753598 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.753612 kubelet[2747]: E0913 00:17:20.753610 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.754045 kubelet[2747]: E0913 00:17:20.754022 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.754045 kubelet[2747]: W0913 00:17:20.754041 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.754115 kubelet[2747]: E0913 00:17:20.754054 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.754388 kubelet[2747]: E0913 00:17:20.754370 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.754388 kubelet[2747]: W0913 00:17:20.754386 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.754441 kubelet[2747]: E0913 00:17:20.754397 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.754872 kubelet[2747]: E0913 00:17:20.754843 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.754872 kubelet[2747]: W0913 00:17:20.754861 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.754872 kubelet[2747]: E0913 00:17:20.754872 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.755153 kubelet[2747]: E0913 00:17:20.755134 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.755153 kubelet[2747]: W0913 00:17:20.755147 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.755208 kubelet[2747]: E0913 00:17:20.755157 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.755403 kubelet[2747]: E0913 00:17:20.755386 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.755403 kubelet[2747]: W0913 00:17:20.755399 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.755467 kubelet[2747]: E0913 00:17:20.755409 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.756138 kubelet[2747]: E0913 00:17:20.756118 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.756138 kubelet[2747]: W0913 00:17:20.756133 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.756207 kubelet[2747]: E0913 00:17:20.756144 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.756359 kubelet[2747]: E0913 00:17:20.756341 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.756359 kubelet[2747]: W0913 00:17:20.756355 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.756436 kubelet[2747]: E0913 00:17:20.756364 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.756551 kubelet[2747]: E0913 00:17:20.756532 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.756551 kubelet[2747]: W0913 00:17:20.756545 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.756603 kubelet[2747]: E0913 00:17:20.756553 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.756940 kubelet[2747]: E0913 00:17:20.756902 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.756940 kubelet[2747]: W0913 00:17:20.756936 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.757026 kubelet[2747]: E0913 00:17:20.756948 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.757026 kubelet[2747]: I0913 00:17:20.756983 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e101dcf-ec95-4d86-b186-db61bd6330ed-kubelet-dir\") pod \"csi-node-driver-wxjn8\" (UID: \"9e101dcf-ec95-4d86-b186-db61bd6330ed\") " pod="calico-system/csi-node-driver-wxjn8" Sep 13 00:17:20.757279 kubelet[2747]: E0913 00:17:20.757256 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.757324 kubelet[2747]: W0913 00:17:20.757282 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.757324 kubelet[2747]: E0913 00:17:20.757307 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.757391 kubelet[2747]: I0913 00:17:20.757330 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9e101dcf-ec95-4d86-b186-db61bd6330ed-socket-dir\") pod \"csi-node-driver-wxjn8\" (UID: \"9e101dcf-ec95-4d86-b186-db61bd6330ed\") " pod="calico-system/csi-node-driver-wxjn8" Sep 13 00:17:20.757623 kubelet[2747]: E0913 00:17:20.757605 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.757623 kubelet[2747]: W0913 00:17:20.757618 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.757674 kubelet[2747]: E0913 00:17:20.757641 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.758071 kubelet[2747]: E0913 00:17:20.757999 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.758071 kubelet[2747]: W0913 00:17:20.758016 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.758071 kubelet[2747]: E0913 00:17:20.758047 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.758300 kubelet[2747]: E0913 00:17:20.758276 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.758300 kubelet[2747]: W0913 00:17:20.758292 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.758460 kubelet[2747]: E0913 00:17:20.758314 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.758460 kubelet[2747]: I0913 00:17:20.758334 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq8wj\" (UniqueName: \"kubernetes.io/projected/9e101dcf-ec95-4d86-b186-db61bd6330ed-kube-api-access-cq8wj\") pod \"csi-node-driver-wxjn8\" (UID: \"9e101dcf-ec95-4d86-b186-db61bd6330ed\") " pod="calico-system/csi-node-driver-wxjn8" Sep 13 00:17:20.758675 kubelet[2747]: E0913 00:17:20.758634 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.758675 kubelet[2747]: W0913 00:17:20.758652 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.758985 kubelet[2747]: E0913 00:17:20.758895 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.759351 kubelet[2747]: E0913 00:17:20.759321 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.759351 kubelet[2747]: W0913 00:17:20.759336 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.759481 kubelet[2747]: E0913 00:17:20.759437 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.759766 kubelet[2747]: E0913 00:17:20.759739 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.759766 kubelet[2747]: W0913 00:17:20.759751 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.759966 kubelet[2747]: E0913 00:17:20.759868 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.759966 kubelet[2747]: I0913 00:17:20.759893 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9e101dcf-ec95-4d86-b186-db61bd6330ed-varrun\") pod \"csi-node-driver-wxjn8\" (UID: \"9e101dcf-ec95-4d86-b186-db61bd6330ed\") " pod="calico-system/csi-node-driver-wxjn8" Sep 13 00:17:20.760403 kubelet[2747]: E0913 00:17:20.760341 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.760403 kubelet[2747]: W0913 00:17:20.760354 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.760403 kubelet[2747]: E0913 00:17:20.760366 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.761130 kubelet[2747]: E0913 00:17:20.761102 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.761130 kubelet[2747]: W0913 00:17:20.761115 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.761308 kubelet[2747]: E0913 00:17:20.761239 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.761527 kubelet[2747]: E0913 00:17:20.761490 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.761527 kubelet[2747]: W0913 00:17:20.761502 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.761527 kubelet[2747]: E0913 00:17:20.761512 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.761960 kubelet[2747]: E0913 00:17:20.761944 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.762068 kubelet[2747]: W0913 00:17:20.762054 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.762137 kubelet[2747]: E0913 00:17:20.762124 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.762555 kubelet[2747]: E0913 00:17:20.762518 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.762555 kubelet[2747]: W0913 00:17:20.762530 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.762555 kubelet[2747]: E0913 00:17:20.762540 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.762685 kubelet[2747]: I0913 00:17:20.762670 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9e101dcf-ec95-4d86-b186-db61bd6330ed-registration-dir\") pod \"csi-node-driver-wxjn8\" (UID: \"9e101dcf-ec95-4d86-b186-db61bd6330ed\") " pod="calico-system/csi-node-driver-wxjn8" Sep 13 00:17:20.763289 kubelet[2747]: E0913 00:17:20.763216 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.763289 kubelet[2747]: W0913 00:17:20.763231 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.763289 kubelet[2747]: E0913 00:17:20.763243 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.763590 kubelet[2747]: E0913 00:17:20.763578 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.763667 kubelet[2747]: W0913 00:17:20.763654 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.763735 kubelet[2747]: E0913 00:17:20.763723 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.863502 kubelet[2747]: E0913 00:17:20.863454 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.863502 kubelet[2747]: W0913 00:17:20.863483 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.863502 kubelet[2747]: E0913 00:17:20.863509 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.863938 kubelet[2747]: E0913 00:17:20.863873 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.863938 kubelet[2747]: W0913 00:17:20.863932 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.864048 kubelet[2747]: E0913 00:17:20.863974 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.864323 kubelet[2747]: E0913 00:17:20.864304 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.864323 kubelet[2747]: W0913 00:17:20.864317 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.864448 kubelet[2747]: E0913 00:17:20.864335 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.864610 kubelet[2747]: E0913 00:17:20.864576 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.864610 kubelet[2747]: W0913 00:17:20.864593 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.864610 kubelet[2747]: E0913 00:17:20.864621 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.865040 kubelet[2747]: E0913 00:17:20.864892 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.865040 kubelet[2747]: W0913 00:17:20.864933 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.865040 kubelet[2747]: E0913 00:17:20.865014 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.865304 kubelet[2747]: E0913 00:17:20.865287 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.865304 kubelet[2747]: W0913 00:17:20.865301 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.865415 kubelet[2747]: E0913 00:17:20.865396 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.865535 kubelet[2747]: E0913 00:17:20.865521 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.865535 kubelet[2747]: W0913 00:17:20.865532 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.865689 kubelet[2747]: E0913 00:17:20.865612 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.865727 kubelet[2747]: E0913 00:17:20.865715 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.865752 kubelet[2747]: W0913 00:17:20.865736 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.865848 kubelet[2747]: E0913 00:17:20.865824 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.865989 kubelet[2747]: E0913 00:17:20.865974 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.865989 kubelet[2747]: W0913 00:17:20.865986 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.866078 kubelet[2747]: E0913 00:17:20.866062 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.866234 kubelet[2747]: E0913 00:17:20.866219 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.866234 kubelet[2747]: W0913 00:17:20.866230 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.866311 kubelet[2747]: E0913 00:17:20.866254 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.866459 kubelet[2747]: E0913 00:17:20.866446 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.866459 kubelet[2747]: W0913 00:17:20.866457 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.866507 kubelet[2747]: E0913 00:17:20.866488 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.866726 kubelet[2747]: E0913 00:17:20.866695 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.866726 kubelet[2747]: W0913 00:17:20.866708 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.867020 kubelet[2747]: E0913 00:17:20.866742 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.867020 kubelet[2747]: E0913 00:17:20.866997 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.867020 kubelet[2747]: W0913 00:17:20.867007 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.867163 kubelet[2747]: E0913 00:17:20.867045 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.867290 kubelet[2747]: E0913 00:17:20.867269 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.867290 kubelet[2747]: W0913 00:17:20.867282 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.867359 kubelet[2747]: E0913 00:17:20.867314 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.867470 kubelet[2747]: E0913 00:17:20.867453 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.867470 kubelet[2747]: W0913 00:17:20.867464 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.867521 kubelet[2747]: E0913 00:17:20.867487 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.867671 kubelet[2747]: E0913 00:17:20.867654 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.867671 kubelet[2747]: W0913 00:17:20.867666 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.867753 kubelet[2747]: E0913 00:17:20.867695 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.867947 kubelet[2747]: E0913 00:17:20.867856 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.867947 kubelet[2747]: W0913 00:17:20.867872 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.867947 kubelet[2747]: E0913 00:17:20.867888 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.868169 kubelet[2747]: E0913 00:17:20.868155 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.868169 kubelet[2747]: W0913 00:17:20.868167 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.868326 kubelet[2747]: E0913 00:17:20.868183 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.868516 kubelet[2747]: E0913 00:17:20.868491 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.868516 kubelet[2747]: W0913 00:17:20.868506 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.868516 kubelet[2747]: E0913 00:17:20.868523 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.868750 kubelet[2747]: E0913 00:17:20.868715 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.868750 kubelet[2747]: W0913 00:17:20.868728 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.868864 kubelet[2747]: E0913 00:17:20.868751 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.869044 kubelet[2747]: E0913 00:17:20.869026 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.869044 kubelet[2747]: W0913 00:17:20.869039 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.869110 kubelet[2747]: E0913 00:17:20.869080 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.869401 kubelet[2747]: E0913 00:17:20.869277 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.869401 kubelet[2747]: W0913 00:17:20.869296 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.869401 kubelet[2747]: E0913 00:17:20.869347 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.869697 kubelet[2747]: E0913 00:17:20.869672 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.869753 kubelet[2747]: W0913 00:17:20.869694 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.869753 kubelet[2747]: E0913 00:17:20.869723 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.870182 kubelet[2747]: E0913 00:17:20.870161 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.870224 kubelet[2747]: W0913 00:17:20.870175 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.870260 kubelet[2747]: E0913 00:17:20.870230 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.870485 kubelet[2747]: E0913 00:17:20.870453 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.870485 kubelet[2747]: W0913 00:17:20.870469 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.870485 kubelet[2747]: E0913 00:17:20.870481 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:20.916183 containerd[1557]: time="2025-09-13T00:17:20.916017204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ff9b66cf7-6mqwx,Uid:7e737b48-d7a5-472d-ac00-f70c07f11ec6,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a9a64175622c7a0b8faec802c1903add7cbc422cebdbd265cadd0d4b099cf35\"" Sep 13 00:17:20.917522 kubelet[2747]: E0913 00:17:20.917482 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:20.917765 kubelet[2747]: E0913 00:17:20.917489 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:20.917765 kubelet[2747]: W0913 00:17:20.917726 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:20.917765 kubelet[2747]: E0913 00:17:20.917748 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:20.918460 containerd[1557]: time="2025-09-13T00:17:20.918429639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:17:21.275173 containerd[1557]: time="2025-09-13T00:17:21.275123726Z" level=info msg="connecting to shim b1b1a0eae98d1043eabc53acca7f84cbb1026b7cd0dfabd38e44ebdc7d523bd8" address="unix:///run/containerd/s/a4b4f10fe3c3d3a7f1fff558daeb52bc8ab9b10b22c2713f8e903cb6a5ccf2d4" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:21.312088 systemd[1]: Started cri-containerd-b1b1a0eae98d1043eabc53acca7f84cbb1026b7cd0dfabd38e44ebdc7d523bd8.scope - libcontainer container b1b1a0eae98d1043eabc53acca7f84cbb1026b7cd0dfabd38e44ebdc7d523bd8. 
Sep 13 00:17:21.371187 containerd[1557]: time="2025-09-13T00:17:21.371039206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vmn7t,Uid:484778c2-c505-4107-99b8-7884442213ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1b1a0eae98d1043eabc53acca7f84cbb1026b7cd0dfabd38e44ebdc7d523bd8\"" Sep 13 00:17:22.118901 kubelet[2747]: E0913 00:17:22.118813 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxjn8" podUID="9e101dcf-ec95-4d86-b186-db61bd6330ed" Sep 13 00:17:22.412116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929630531.mount: Deactivated successfully. Sep 13 00:17:23.713227 containerd[1557]: time="2025-09-13T00:17:23.713181316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:23.713958 containerd[1557]: time="2025-09-13T00:17:23.713904602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:17:23.715138 containerd[1557]: time="2025-09-13T00:17:23.715099867Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:23.717409 containerd[1557]: time="2025-09-13T00:17:23.717378399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:23.717993 containerd[1557]: time="2025-09-13T00:17:23.717941504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.799450806s" Sep 13 00:17:23.717993 containerd[1557]: time="2025-09-13T00:17:23.717989577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:17:23.718937 containerd[1557]: time="2025-09-13T00:17:23.718811415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:17:23.728127 containerd[1557]: time="2025-09-13T00:17:23.728074550Z" level=info msg="CreateContainer within sandbox \"5a9a64175622c7a0b8faec802c1903add7cbc422cebdbd265cadd0d4b099cf35\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:17:23.737204 containerd[1557]: time="2025-09-13T00:17:23.737144510Z" level=info msg="Container 0c65746129e48eb14716dbde5b82ad25025a27ac11e242ba817b90af4c9610d8: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:17:23.747528 containerd[1557]: time="2025-09-13T00:17:23.747492865Z" level=info msg="CreateContainer within sandbox \"5a9a64175622c7a0b8faec802c1903add7cbc422cebdbd265cadd0d4b099cf35\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0c65746129e48eb14716dbde5b82ad25025a27ac11e242ba817b90af4c9610d8\"" Sep 13 00:17:23.748232 containerd[1557]: time="2025-09-13T00:17:23.748010231Z" level=info msg="StartContainer for \"0c65746129e48eb14716dbde5b82ad25025a27ac11e242ba817b90af4c9610d8\"" Sep 13 00:17:23.749185 containerd[1557]: time="2025-09-13T00:17:23.749158804Z" level=info msg="connecting to shim 0c65746129e48eb14716dbde5b82ad25025a27ac11e242ba817b90af4c9610d8" address="unix:///run/containerd/s/76dd0a4751b437517897be75c74a1614d32635ea7515801765c990fb8c443b3f" protocol=ttrpc version=3 Sep 13 
00:17:23.780073 systemd[1]: Started cri-containerd-0c65746129e48eb14716dbde5b82ad25025a27ac11e242ba817b90af4c9610d8.scope - libcontainer container 0c65746129e48eb14716dbde5b82ad25025a27ac11e242ba817b90af4c9610d8. Sep 13 00:17:23.834861 containerd[1557]: time="2025-09-13T00:17:23.834808396Z" level=info msg="StartContainer for \"0c65746129e48eb14716dbde5b82ad25025a27ac11e242ba817b90af4c9610d8\" returns successfully" Sep 13 00:17:24.118141 kubelet[2747]: E0913 00:17:24.118009 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxjn8" podUID="9e101dcf-ec95-4d86-b186-db61bd6330ed" Sep 13 00:17:24.215185 kubelet[2747]: E0913 00:17:24.215140 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:24.278935 kubelet[2747]: E0913 00:17:24.278872 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.278935 kubelet[2747]: W0913 00:17:24.278897 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.278935 kubelet[2747]: E0913 00:17:24.278940 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:24.279156 kubelet[2747]: E0913 00:17:24.279130 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.279156 kubelet[2747]: W0913 00:17:24.279139 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.279156 kubelet[2747]: E0913 00:17:24.279148 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:24.279346 kubelet[2747]: E0913 00:17:24.279317 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.279346 kubelet[2747]: W0913 00:17:24.279333 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.279346 kubelet[2747]: E0913 00:17:24.279346 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:24.279629 kubelet[2747]: E0913 00:17:24.279613 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.279629 kubelet[2747]: W0913 00:17:24.279625 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.279724 kubelet[2747]: E0913 00:17:24.279637 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:24.279856 kubelet[2747]: E0913 00:17:24.279827 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.279856 kubelet[2747]: W0913 00:17:24.279853 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.279965 kubelet[2747]: E0913 00:17:24.279866 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:24.280193 kubelet[2747]: E0913 00:17:24.280171 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.280193 kubelet[2747]: W0913 00:17:24.280183 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.280254 kubelet[2747]: E0913 00:17:24.280193 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:24.280381 kubelet[2747]: E0913 00:17:24.280369 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.280381 kubelet[2747]: W0913 00:17:24.280378 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.280441 kubelet[2747]: E0913 00:17:24.280387 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:24.280599 kubelet[2747]: E0913 00:17:24.280585 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.280599 kubelet[2747]: W0913 00:17:24.280596 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.280656 kubelet[2747]: E0913 00:17:24.280605 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:24.280875 kubelet[2747]: E0913 00:17:24.280853 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.280875 kubelet[2747]: W0913 00:17:24.280864 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.280875 kubelet[2747]: E0913 00:17:24.280873 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:24.281106 kubelet[2747]: E0913 00:17:24.281092 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.281106 kubelet[2747]: W0913 00:17:24.281102 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.281171 kubelet[2747]: E0913 00:17:24.281113 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:24.281315 kubelet[2747]: E0913 00:17:24.281302 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.281315 kubelet[2747]: W0913 00:17:24.281312 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.281375 kubelet[2747]: E0913 00:17:24.281321 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:24.281513 kubelet[2747]: E0913 00:17:24.281501 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.281513 kubelet[2747]: W0913 00:17:24.281510 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.281571 kubelet[2747]: E0913 00:17:24.281519 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:24.281727 kubelet[2747]: E0913 00:17:24.281715 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.281727 kubelet[2747]: W0913 00:17:24.281724 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.281785 kubelet[2747]: E0913 00:17:24.281733 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:24.281943 kubelet[2747]: E0913 00:17:24.281929 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.281943 kubelet[2747]: W0913 00:17:24.281940 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.282012 kubelet[2747]: E0913 00:17:24.281960 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:24.282160 kubelet[2747]: E0913 00:17:24.282147 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.282160 kubelet[2747]: W0913 00:17:24.282157 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.282217 kubelet[2747]: E0913 00:17:24.282166 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:24.291674 kubelet[2747]: E0913 00:17:24.291657 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.291674 kubelet[2747]: W0913 00:17:24.291671 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.291756 kubelet[2747]: E0913 00:17:24.291682 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:24.292038 kubelet[2747]: E0913 00:17:24.292010 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.292038 kubelet[2747]: W0913 00:17:24.292025 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.292133 kubelet[2747]: E0913 00:17:24.292044 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:24.292283 kubelet[2747]: E0913 00:17:24.292268 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.292283 kubelet[2747]: W0913 00:17:24.292281 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.292347 kubelet[2747]: E0913 00:17:24.292304 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:24.292532 kubelet[2747]: E0913 00:17:24.292518 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.292532 kubelet[2747]: W0913 00:17:24.292529 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.292592 kubelet[2747]: E0913 00:17:24.292544 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:17:24.292757 kubelet[2747]: E0913 00:17:24.292743 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.292757 kubelet[2747]: W0913 00:17:24.292754 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.292816 kubelet[2747]: E0913 00:17:24.292767 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:17:24.292997 kubelet[2747]: E0913 00:17:24.292979 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:17:24.292997 kubelet[2747]: W0913 00:17:24.292990 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:17:24.293068 kubelet[2747]: E0913 00:17:24.293002 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:17:24.293233 kubelet[2747]: E0913 00:17:24.293216 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.293233 kubelet[2747]: W0913 00:17:24.293227 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.293308 kubelet[2747]: E0913 00:17:24.293275 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.293404 kubelet[2747]: E0913 00:17:24.293388 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.293404 kubelet[2747]: W0913 00:17:24.293399 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.293468 kubelet[2747]: E0913 00:17:24.293425 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.293606 kubelet[2747]: E0913 00:17:24.293590 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.293606 kubelet[2747]: W0913 00:17:24.293601 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.293661 kubelet[2747]: E0913 00:17:24.293614 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.293789 kubelet[2747]: E0913 00:17:24.293770 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.293789 kubelet[2747]: W0913 00:17:24.293784 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.293855 kubelet[2747]: E0913 00:17:24.293799 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.294009 kubelet[2747]: E0913 00:17:24.293992 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.294009 kubelet[2747]: W0913 00:17:24.294003 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.294069 kubelet[2747]: E0913 00:17:24.294016 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.294210 kubelet[2747]: E0913 00:17:24.294194 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.294210 kubelet[2747]: W0913 00:17:24.294204 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.294280 kubelet[2747]: E0913 00:17:24.294217 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.294425 kubelet[2747]: E0913 00:17:24.294407 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.294425 kubelet[2747]: W0913 00:17:24.294419 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.294478 kubelet[2747]: E0913 00:17:24.294434 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.294610 kubelet[2747]: E0913 00:17:24.294595 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.294610 kubelet[2747]: W0913 00:17:24.294605 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.294677 kubelet[2747]: E0913 00:17:24.294619 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.295019 kubelet[2747]: E0913 00:17:24.294992 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.295019 kubelet[2747]: W0913 00:17:24.295012 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.295110 kubelet[2747]: E0913 00:17:24.295036 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.295234 kubelet[2747]: E0913 00:17:24.295220 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.295234 kubelet[2747]: W0913 00:17:24.295232 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.295298 kubelet[2747]: E0913 00:17:24.295246 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.295480 kubelet[2747]: E0913 00:17:24.295460 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.295480 kubelet[2747]: W0913 00:17:24.295471 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.295538 kubelet[2747]: E0913 00:17:24.295482 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.295851 kubelet[2747]: E0913 00:17:24.295831 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:17:24.295851 kubelet[2747]: W0913 00:17:24.295843 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:17:24.295937 kubelet[2747]: E0913 00:17:24.295852 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:17:24.973086 containerd[1557]: time="2025-09-13T00:17:24.973022630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:24.973679 containerd[1557]: time="2025-09-13T00:17:24.973653215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 13 00:17:24.974716 containerd[1557]: time="2025-09-13T00:17:24.974680612Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:24.976843 containerd[1557]: time="2025-09-13T00:17:24.976812094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:24.977483 containerd[1557]: time="2025-09-13T00:17:24.977457518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.25861411s"
Sep 13 00:17:24.977544 containerd[1557]: time="2025-09-13T00:17:24.977490582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 13 00:17:24.979716 containerd[1557]: time="2025-09-13T00:17:24.979680848Z" level=info msg="CreateContainer within sandbox \"b1b1a0eae98d1043eabc53acca7f84cbb1026b7cd0dfabd38e44ebdc7d523bd8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 13 00:17:24.991178 containerd[1557]: time="2025-09-13T00:17:24.991138272Z" level=info msg="Container 76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:17:25.004058 containerd[1557]: time="2025-09-13T00:17:25.003999188Z" level=info msg="CreateContainer within sandbox \"b1b1a0eae98d1043eabc53acca7f84cbb1026b7cd0dfabd38e44ebdc7d523bd8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3\""
Sep 13 00:17:25.004542 containerd[1557]: time="2025-09-13T00:17:25.004510280Z" level=info msg="StartContainer for \"76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3\""
Sep 13 00:17:25.006156 containerd[1557]: time="2025-09-13T00:17:25.006117921Z" level=info msg="connecting to shim 76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3" address="unix:///run/containerd/s/a4b4f10fe3c3d3a7f1fff558daeb52bc8ab9b10b22c2713f8e903cb6a5ccf2d4" protocol=ttrpc version=3
Sep 13 00:17:25.042080 systemd[1]: Started cri-containerd-76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3.scope - libcontainer container 76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3.
Sep 13 00:17:25.107165 systemd[1]: cri-containerd-76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3.scope: Deactivated successfully.
Sep 13 00:17:25.110149 containerd[1557]: time="2025-09-13T00:17:25.110113183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3\" id:\"76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3\" pid:3468 exited_at:{seconds:1757722645 nanos:109536293}"
Sep 13 00:17:25.340641 containerd[1557]: time="2025-09-13T00:17:25.340378905Z" level=info msg="received exit event container_id:\"76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3\" id:\"76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3\" pid:3468 exited_at:{seconds:1757722645 nanos:109536293}"
Sep 13 00:17:25.344490 kubelet[2747]: I0913 00:17:25.344463 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:17:25.345311 kubelet[2747]: E0913 00:17:25.345159 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:25.350277 containerd[1557]: time="2025-09-13T00:17:25.346025028Z" level=info msg="StartContainer for \"76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3\" returns successfully"
Sep 13 00:17:25.375307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76014ec72905fb699ba13a19d7456b1225f1824ca4253df76b30555fc58508d3-rootfs.mount: Deactivated successfully.
Sep 13 00:17:26.117903 kubelet[2747]: E0913 00:17:26.117826 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxjn8" podUID="9e101dcf-ec95-4d86-b186-db61bd6330ed"
Sep 13 00:17:26.349654 containerd[1557]: time="2025-09-13T00:17:26.349610297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 00:17:26.364952 kubelet[2747]: I0913 00:17:26.364603 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7ff9b66cf7-6mqwx" podStartSLOduration=3.564040497 podStartE2EDuration="6.364584479s" podCreationTimestamp="2025-09-13 00:17:20 +0000 UTC" firstStartedPulling="2025-09-13 00:17:20.918159152 +0000 UTC m=+21.894025671" lastFinishedPulling="2025-09-13 00:17:23.718703134 +0000 UTC m=+24.694569653" observedRunningTime="2025-09-13 00:17:24.225885171 +0000 UTC m=+25.201751690" watchObservedRunningTime="2025-09-13 00:17:26.364584479 +0000 UTC m=+27.340450998"
Sep 13 00:17:28.118225 kubelet[2747]: E0913 00:17:28.118151 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxjn8" podUID="9e101dcf-ec95-4d86-b186-db61bd6330ed"
Sep 13 00:17:29.134377 containerd[1557]: time="2025-09-13T00:17:29.134232302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:29.134981 containerd[1557]: time="2025-09-13T00:17:29.134933700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 13 00:17:29.136115 containerd[1557]: time="2025-09-13T00:17:29.136076684Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:29.138332 containerd[1557]: time="2025-09-13T00:17:29.138289628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:29.138925 containerd[1557]: time="2025-09-13T00:17:29.138869662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.789215019s"
Sep 13 00:17:29.139050 containerd[1557]: time="2025-09-13T00:17:29.138924368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 13 00:17:29.140937 containerd[1557]: time="2025-09-13T00:17:29.140893912Z" level=info msg="CreateContainer within sandbox \"b1b1a0eae98d1043eabc53acca7f84cbb1026b7cd0dfabd38e44ebdc7d523bd8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 00:17:29.151256 containerd[1557]: time="2025-09-13T00:17:29.151220519Z" level=info msg="Container 6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:17:29.160250 containerd[1557]: time="2025-09-13T00:17:29.160217110Z" level=info msg="CreateContainer within sandbox \"b1b1a0eae98d1043eabc53acca7f84cbb1026b7cd0dfabd38e44ebdc7d523bd8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0\""
Sep 13 00:17:29.160760 containerd[1557]: time="2025-09-13T00:17:29.160713470Z" level=info msg="StartContainer for \"6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0\""
Sep 13 00:17:29.162390 containerd[1557]: time="2025-09-13T00:17:29.162352254Z" level=info msg="connecting to shim 6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0" address="unix:///run/containerd/s/a4b4f10fe3c3d3a7f1fff558daeb52bc8ab9b10b22c2713f8e903cb6a5ccf2d4" protocol=ttrpc version=3
Sep 13 00:17:29.189321 systemd[1]: Started cri-containerd-6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0.scope - libcontainer container 6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0.
Sep 13 00:17:29.237959 containerd[1557]: time="2025-09-13T00:17:29.237798640Z" level=info msg="StartContainer for \"6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0\" returns successfully"
Sep 13 00:17:30.118386 kubelet[2747]: E0913 00:17:30.118248 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxjn8" podUID="9e101dcf-ec95-4d86-b186-db61bd6330ed"
Sep 13 00:17:30.260798 kubelet[2747]: I0913 00:17:30.260377 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:17:30.261023 kubelet[2747]: E0913 00:17:30.261004 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:30.266896 systemd[1]: cri-containerd-6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0.scope: Deactivated successfully.
Sep 13 00:17:30.267270 systemd[1]: cri-containerd-6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0.scope: Consumed 614ms CPU time, 175M memory peak, 1.9M read from disk, 171.3M written to disk.
Sep 13 00:17:30.269763 containerd[1557]: time="2025-09-13T00:17:30.269247842Z" level=info msg="received exit event container_id:\"6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0\" id:\"6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0\" pid:3527 exited_at:{seconds:1757722650 nanos:268717335}"
Sep 13 00:17:30.269763 containerd[1557]: time="2025-09-13T00:17:30.269284924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0\" id:\"6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0\" pid:3527 exited_at:{seconds:1757722650 nanos:268717335}"
Sep 13 00:17:30.274066 containerd[1557]: time="2025-09-13T00:17:30.274020148Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:17:30.298844 kubelet[2747]: I0913 00:17:30.298798 2747 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 13 00:17:30.304850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6877b76405e6b3c759e985e47e202f030a2ad1b303c962a4871759ad347f49a0-rootfs.mount: Deactivated successfully.
Sep 13 00:17:30.358461 systemd[1]: Created slice kubepods-besteffort-pod9249a12e_d069_4512_b81e_37ec9c950ff4.slice - libcontainer container kubepods-besteffort-pod9249a12e_d069_4512_b81e_37ec9c950ff4.slice.
Sep 13 00:17:30.362280 kubelet[2747]: E0913 00:17:30.362252 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:30.364743 systemd[1]: Created slice kubepods-besteffort-pod10f255ff_eb3e_4053_b0b8_b528af157243.slice - libcontainer container kubepods-besteffort-pod10f255ff_eb3e_4053_b0b8_b528af157243.slice.
Sep 13 00:17:30.430283 kubelet[2747]: I0913 00:17:30.430106 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9249a12e-d069-4512-b81e-37ec9c950ff4-calico-apiserver-certs\") pod \"calico-apiserver-568d757654-spds8\" (UID: \"9249a12e-d069-4512-b81e-37ec9c950ff4\") " pod="calico-apiserver/calico-apiserver-568d757654-spds8"
Sep 13 00:17:30.430283 kubelet[2747]: I0913 00:17:30.430181 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f255ff-eb3e-4053-b0b8-b528af157243-whisker-ca-bundle\") pod \"whisker-5cb844f4c4-j5lmn\" (UID: \"10f255ff-eb3e-4053-b0b8-b528af157243\") " pod="calico-system/whisker-5cb844f4c4-j5lmn"
Sep 13 00:17:30.430283 kubelet[2747]: I0913 00:17:30.430230 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10f255ff-eb3e-4053-b0b8-b528af157243-whisker-backend-key-pair\") pod \"whisker-5cb844f4c4-j5lmn\" (UID: \"10f255ff-eb3e-4053-b0b8-b528af157243\") " pod="calico-system/whisker-5cb844f4c4-j5lmn"
Sep 13 00:17:30.430283 kubelet[2747]: I0913 00:17:30.430255 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clwsp\" (UniqueName: \"kubernetes.io/projected/10f255ff-eb3e-4053-b0b8-b528af157243-kube-api-access-clwsp\") pod \"whisker-5cb844f4c4-j5lmn\" (UID: \"10f255ff-eb3e-4053-b0b8-b528af157243\") " pod="calico-system/whisker-5cb844f4c4-j5lmn"
Sep 13 00:17:30.462261 kubelet[2747]: I0913 00:17:30.430294 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbqmw\" (UniqueName: \"kubernetes.io/projected/9249a12e-d069-4512-b81e-37ec9c950ff4-kube-api-access-sbqmw\") pod \"calico-apiserver-568d757654-spds8\" (UID: \"9249a12e-d069-4512-b81e-37ec9c950ff4\") " pod="calico-apiserver/calico-apiserver-568d757654-spds8"
Sep 13 00:17:30.468935 systemd[1]: Created slice kubepods-besteffort-pode40e6753_fead_4584_935f_903769032081.slice - libcontainer container kubepods-besteffort-pode40e6753_fead_4584_935f_903769032081.slice.
Sep 13 00:17:30.475685 systemd[1]: Created slice kubepods-burstable-podfc8c958f_db82_4584_b330_b71768df1e39.slice - libcontainer container kubepods-burstable-podfc8c958f_db82_4584_b330_b71768df1e39.slice.
Sep 13 00:17:30.482267 systemd[1]: Created slice kubepods-burstable-podbacbf801_d4d8_426e_9e88_33f253ebce09.slice - libcontainer container kubepods-burstable-podbacbf801_d4d8_426e_9e88_33f253ebce09.slice.
Sep 13 00:17:30.494243 systemd[1]: Created slice kubepods-besteffort-poddb42c4bd_fd5b_4a58_a0f9_c81216fe7c89.slice - libcontainer container kubepods-besteffort-poddb42c4bd_fd5b_4a58_a0f9_c81216fe7c89.slice.
Sep 13 00:17:30.503639 systemd[1]: Created slice kubepods-besteffort-pod6e6da8c5_880e_4731_8be3_963452525c85.slice - libcontainer container kubepods-besteffort-pod6e6da8c5_880e_4731_8be3_963452525c85.slice.
Sep 13 00:17:30.531490 kubelet[2747]: I0913 00:17:30.531431 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8c6v\" (UniqueName: \"kubernetes.io/projected/6e6da8c5-880e-4731-8be3-963452525c85-kube-api-access-f8c6v\") pod \"goldmane-54d579b49d-x8s5r\" (UID: \"6e6da8c5-880e-4731-8be3-963452525c85\") " pod="calico-system/goldmane-54d579b49d-x8s5r"
Sep 13 00:17:30.531490 kubelet[2747]: I0913 00:17:30.531493 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6e6da8c5-880e-4731-8be3-963452525c85-goldmane-key-pair\") pod \"goldmane-54d579b49d-x8s5r\" (UID: \"6e6da8c5-880e-4731-8be3-963452525c85\") " pod="calico-system/goldmane-54d579b49d-x8s5r"
Sep 13 00:17:30.531490 kubelet[2747]: I0913 00:17:30.531516 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e40e6753-fead-4584-935f-903769032081-tigera-ca-bundle\") pod \"calico-kube-controllers-86779ddf76-kmlm2\" (UID: \"e40e6753-fead-4584-935f-903769032081\") " pod="calico-system/calico-kube-controllers-86779ddf76-kmlm2"
Sep 13 00:17:30.531729 kubelet[2747]: I0913 00:17:30.531537 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9mdb\" (UniqueName: \"kubernetes.io/projected/e40e6753-fead-4584-935f-903769032081-kube-api-access-k9mdb\") pod \"calico-kube-controllers-86779ddf76-kmlm2\" (UID: \"e40e6753-fead-4584-935f-903769032081\") " pod="calico-system/calico-kube-controllers-86779ddf76-kmlm2"
Sep 13 00:17:30.531729 kubelet[2747]: I0913 00:17:30.531563 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsgn\" (UniqueName: \"kubernetes.io/projected/fc8c958f-db82-4584-b330-b71768df1e39-kube-api-access-rvsgn\") pod \"coredns-668d6bf9bc-5nzdh\" (UID: \"fc8c958f-db82-4584-b330-b71768df1e39\") " pod="kube-system/coredns-668d6bf9bc-5nzdh"
Sep 13 00:17:30.531729 kubelet[2747]: I0913 00:17:30.531582 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e6da8c5-880e-4731-8be3-963452525c85-config\") pod \"goldmane-54d579b49d-x8s5r\" (UID: \"6e6da8c5-880e-4731-8be3-963452525c85\") " pod="calico-system/goldmane-54d579b49d-x8s5r"
Sep 13 00:17:30.531729 kubelet[2747]: I0913 00:17:30.531632 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc8c958f-db82-4584-b330-b71768df1e39-config-volume\") pod \"coredns-668d6bf9bc-5nzdh\" (UID: \"fc8c958f-db82-4584-b330-b71768df1e39\") " pod="kube-system/coredns-668d6bf9bc-5nzdh"
Sep 13 00:17:30.531729 kubelet[2747]: I0913 00:17:30.531654 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spt8l\" (UniqueName: \"kubernetes.io/projected/bacbf801-d4d8-426e-9e88-33f253ebce09-kube-api-access-spt8l\") pod \"coredns-668d6bf9bc-hnfxw\" (UID: \"bacbf801-d4d8-426e-9e88-33f253ebce09\") " pod="kube-system/coredns-668d6bf9bc-hnfxw"
Sep 13 00:17:30.531857 kubelet[2747]: I0913 00:17:30.531686 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcmn\" (UniqueName: \"kubernetes.io/projected/db42c4bd-fd5b-4a58-a0f9-c81216fe7c89-kube-api-access-jjcmn\") pod \"calico-apiserver-568d757654-xnwhr\" (UID: \"db42c4bd-fd5b-4a58-a0f9-c81216fe7c89\") " pod="calico-apiserver/calico-apiserver-568d757654-xnwhr"
Sep 13 00:17:30.531857 kubelet[2747]: I0913 00:17:30.531720 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bacbf801-d4d8-426e-9e88-33f253ebce09-config-volume\") pod \"coredns-668d6bf9bc-hnfxw\" (UID: \"bacbf801-d4d8-426e-9e88-33f253ebce09\") " pod="kube-system/coredns-668d6bf9bc-hnfxw"
Sep 13 00:17:30.531857 kubelet[2747]: I0913 00:17:30.531774 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e6da8c5-880e-4731-8be3-963452525c85-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-x8s5r\" (UID: \"6e6da8c5-880e-4731-8be3-963452525c85\") " pod="calico-system/goldmane-54d579b49d-x8s5r"
Sep 13 00:17:30.531857 kubelet[2747]: I0913 00:17:30.531793 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db42c4bd-fd5b-4a58-a0f9-c81216fe7c89-calico-apiserver-certs\") pod \"calico-apiserver-568d757654-xnwhr\" (UID: \"db42c4bd-fd5b-4a58-a0f9-c81216fe7c89\") " pod="calico-apiserver/calico-apiserver-568d757654-xnwhr"
Sep 13 00:17:30.757811 containerd[1557]: time="2025-09-13T00:17:30.757763247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-spds8,Uid:9249a12e-d069-4512-b81e-37ec9c950ff4,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:17:30.757975 containerd[1557]: time="2025-09-13T00:17:30.757782654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cb844f4c4-j5lmn,Uid:10f255ff-eb3e-4053-b0b8-b528af157243,Namespace:calico-system,Attempt:0,}"
Sep 13 00:17:30.772972 containerd[1557]: time="2025-09-13T00:17:30.772663233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86779ddf76-kmlm2,Uid:e40e6753-fead-4584-935f-903769032081,Namespace:calico-system,Attempt:0,}"
Sep 13 00:17:30.780952 kubelet[2747]: E0913 00:17:30.780902 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:30.783054 containerd[1557]: time="2025-09-13T00:17:30.783014807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nzdh,Uid:fc8c958f-db82-4584-b330-b71768df1e39,Namespace:kube-system,Attempt:0,}"
Sep 13 00:17:30.786440 kubelet[2747]: E0913 00:17:30.786177 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:30.787370 containerd[1557]: time="2025-09-13T00:17:30.787337402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnfxw,Uid:bacbf801-d4d8-426e-9e88-33f253ebce09,Namespace:kube-system,Attempt:0,}"
Sep 13 00:17:30.799843 containerd[1557]: time="2025-09-13T00:17:30.799803799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-xnwhr,Uid:db42c4bd-fd5b-4a58-a0f9-c81216fe7c89,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:17:30.810951 containerd[1557]: time="2025-09-13T00:17:30.810759492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x8s5r,Uid:6e6da8c5-880e-4731-8be3-963452525c85,Namespace:calico-system,Attempt:0,}"
Sep 13 00:17:30.907089 containerd[1557]: time="2025-09-13T00:17:30.907004647Z" level=error msg="Failed to destroy network for sandbox \"d32d98069e3e5a1a543d4d323f133f79643434abf7f6fb904e46fcc4adb365be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.923441 containerd[1557]: time="2025-09-13T00:17:30.909160650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnfxw,Uid:bacbf801-d4d8-426e-9e88-33f253ebce09,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32d98069e3e5a1a543d4d323f133f79643434abf7f6fb904e46fcc4adb365be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.924003 containerd[1557]: time="2025-09-13T00:17:30.917053184Z" level=error msg="Failed to destroy network for sandbox \"6bf237db98e051a771178052cb315f327afd42e8f0fe6af6eb209da1c498f5a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.926820 containerd[1557]: time="2025-09-13T00:17:30.926778125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x8s5r,Uid:6e6da8c5-880e-4731-8be3-963452525c85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf237db98e051a771178052cb315f327afd42e8f0fe6af6eb209da1c498f5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.939516 containerd[1557]: time="2025-09-13T00:17:30.939390234Z" level=error msg="Failed to destroy network for sandbox \"c71e8e1b1850a0c5df645750d2ae1d505d770f8dfb9f6581d89a422d274aa4cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.940823 containerd[1557]: time="2025-09-13T00:17:30.940799621Z" level=error msg="Failed to destroy network for sandbox \"12e74e44eb9d65eada0f11c6048b6a0492f47e2703c178036d207096a384b084\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.942187 containerd[1557]: time="2025-09-13T00:17:30.942159272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nzdh,Uid:fc8c958f-db82-4584-b330-b71768df1e39,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71e8e1b1850a0c5df645750d2ae1d505d770f8dfb9f6581d89a422d274aa4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.946306 containerd[1557]: time="2025-09-13T00:17:30.946264648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86779ddf76-kmlm2,Uid:e40e6753-fead-4584-935f-903769032081,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e74e44eb9d65eada0f11c6048b6a0492f47e2703c178036d207096a384b084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.947112 containerd[1557]: time="2025-09-13T00:17:30.947076750Z" level=error msg="Failed to destroy network for sandbox \"4aaa733db87befef54128caecebf92eab4769ebec259c2afb0515c0d411d3ce9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.948282 containerd[1557]: time="2025-09-13T00:17:30.948247636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cb844f4c4-j5lmn,Uid:10f255ff-eb3e-4053-b0b8-b528af157243,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aaa733db87befef54128caecebf92eab4769ebec259c2afb0515c0d411d3ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.953124 containerd[1557]: time="2025-09-13T00:17:30.953068896Z" level=error msg="Failed to destroy network for sandbox \"e95ef9fd9245d68aaca0cce2923e8b5c40cb55b409e81bacd9a47ffdc8a79b0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.954384 containerd[1557]: time="2025-09-13T00:17:30.954320870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-xnwhr,Uid:db42c4bd-fd5b-4a58-a0f9-c81216fe7c89,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95ef9fd9245d68aaca0cce2923e8b5c40cb55b409e81bacd9a47ffdc8a79b0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.958885 containerd[1557]: time="2025-09-13T00:17:30.958838201Z" level=error msg="Failed to destroy network for sandbox \"a3d34d6309c5397e299f1d15397ed6df9fedd7f3114d5a04454c44bf88eeec8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.960509 containerd[1557]: time="2025-09-13T00:17:30.959934514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-spds8,Uid:9249a12e-d069-4512-b81e-37ec9c950ff4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d34d6309c5397e299f1d15397ed6df9fedd7f3114d5a04454c44bf88eeec8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.960726 kubelet[2747]: E0913 00:17:30.960012 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95ef9fd9245d68aaca0cce2923e8b5c40cb55b409e81bacd9a47ffdc8a79b0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.960726 kubelet[2747]: E0913 00:17:30.960040 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32d98069e3e5a1a543d4d323f133f79643434abf7f6fb904e46fcc4adb365be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:17:30.960726 kubelet[2747]: E0913 00:17:30.960111 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95ef9fd9245d68aaca0cce2923e8b5c40cb55b409e81bacd9a47ffdc8a79b0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568d757654-xnwhr"
Sep 13 00:17:30.960726 kubelet[2747]: E0913 00:17:30.960135 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32d98069e3e5a1a543d4d323f133f79643434abf7f6fb904e46fcc4adb365be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="kube-system/coredns-668d6bf9bc-hnfxw" Sep 13 00:17:30.960881 kubelet[2747]: E0913 00:17:30.960145 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95ef9fd9245d68aaca0cce2923e8b5c40cb55b409e81bacd9a47ffdc8a79b0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568d757654-xnwhr" Sep 13 00:17:30.960881 kubelet[2747]: E0913 00:17:30.960166 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d32d98069e3e5a1a543d4d323f133f79643434abf7f6fb904e46fcc4adb365be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hnfxw" Sep 13 00:17:30.960881 kubelet[2747]: E0913 00:17:30.960152 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d34d6309c5397e299f1d15397ed6df9fedd7f3114d5a04454c44bf88eeec8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:30.960991 kubelet[2747]: E0913 00:17:30.960212 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568d757654-xnwhr_calico-apiserver(db42c4bd-fd5b-4a58-a0f9-c81216fe7c89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568d757654-xnwhr_calico-apiserver(db42c4bd-fd5b-4a58-a0f9-c81216fe7c89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e95ef9fd9245d68aaca0cce2923e8b5c40cb55b409e81bacd9a47ffdc8a79b0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568d757654-xnwhr" podUID="db42c4bd-fd5b-4a58-a0f9-c81216fe7c89" Sep 13 00:17:30.960991 kubelet[2747]: E0913 00:17:30.960251 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d34d6309c5397e299f1d15397ed6df9fedd7f3114d5a04454c44bf88eeec8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568d757654-spds8" Sep 13 00:17:30.960991 kubelet[2747]: E0913 00:17:30.960242 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hnfxw_kube-system(bacbf801-d4d8-426e-9e88-33f253ebce09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hnfxw_kube-system(bacbf801-d4d8-426e-9e88-33f253ebce09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d32d98069e3e5a1a543d4d323f133f79643434abf7f6fb904e46fcc4adb365be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hnfxw" podUID="bacbf801-d4d8-426e-9e88-33f253ebce09" Sep 13 00:17:30.961117 kubelet[2747]: E0913 00:17:30.960278 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d34d6309c5397e299f1d15397ed6df9fedd7f3114d5a04454c44bf88eeec8d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568d757654-spds8" Sep 13 00:17:30.961117 kubelet[2747]: E0913 00:17:30.960297 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e74e44eb9d65eada0f11c6048b6a0492f47e2703c178036d207096a384b084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:30.961117 kubelet[2747]: E0913 00:17:30.960322 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e74e44eb9d65eada0f11c6048b6a0492f47e2703c178036d207096a384b084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86779ddf76-kmlm2" Sep 13 00:17:30.961224 kubelet[2747]: E0913 00:17:30.960331 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568d757654-spds8_calico-apiserver(9249a12e-d069-4512-b81e-37ec9c950ff4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568d757654-spds8_calico-apiserver(9249a12e-d069-4512-b81e-37ec9c950ff4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3d34d6309c5397e299f1d15397ed6df9fedd7f3114d5a04454c44bf88eeec8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568d757654-spds8" podUID="9249a12e-d069-4512-b81e-37ec9c950ff4" Sep 13 
00:17:30.961224 kubelet[2747]: E0913 00:17:30.960340 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e74e44eb9d65eada0f11c6048b6a0492f47e2703c178036d207096a384b084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86779ddf76-kmlm2" Sep 13 00:17:30.961224 kubelet[2747]: E0913 00:17:30.960365 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf237db98e051a771178052cb315f327afd42e8f0fe6af6eb209da1c498f5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:30.961329 kubelet[2747]: E0913 00:17:30.960386 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf237db98e051a771178052cb315f327afd42e8f0fe6af6eb209da1c498f5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-x8s5r" Sep 13 00:17:30.961329 kubelet[2747]: E0913 00:17:30.960388 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86779ddf76-kmlm2_calico-system(e40e6753-fead-4584-935f-903769032081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86779ddf76-kmlm2_calico-system(e40e6753-fead-4584-935f-903769032081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12e74e44eb9d65eada0f11c6048b6a0492f47e2703c178036d207096a384b084\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86779ddf76-kmlm2" podUID="e40e6753-fead-4584-935f-903769032081" Sep 13 00:17:30.961329 kubelet[2747]: E0913 00:17:30.960399 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf237db98e051a771178052cb315f327afd42e8f0fe6af6eb209da1c498f5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-x8s5r" Sep 13 00:17:30.961439 kubelet[2747]: E0913 00:17:30.960427 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71e8e1b1850a0c5df645750d2ae1d505d770f8dfb9f6581d89a422d274aa4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:30.961439 kubelet[2747]: E0913 00:17:30.960428 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-x8s5r_calico-system(6e6da8c5-880e-4731-8be3-963452525c85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-x8s5r_calico-system(6e6da8c5-880e-4731-8be3-963452525c85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bf237db98e051a771178052cb315f327afd42e8f0fe6af6eb209da1c498f5a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-x8s5r" 
podUID="6e6da8c5-880e-4731-8be3-963452525c85" Sep 13 00:17:30.961439 kubelet[2747]: E0913 00:17:30.960448 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71e8e1b1850a0c5df645750d2ae1d505d770f8dfb9f6581d89a422d274aa4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5nzdh" Sep 13 00:17:30.961439 kubelet[2747]: E0913 00:17:30.960461 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aaa733db87befef54128caecebf92eab4769ebec259c2afb0515c0d411d3ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:30.961565 kubelet[2747]: E0913 00:17:30.960487 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aaa733db87befef54128caecebf92eab4769ebec259c2afb0515c0d411d3ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cb844f4c4-j5lmn" Sep 13 00:17:30.961565 kubelet[2747]: E0913 00:17:30.960512 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aaa733db87befef54128caecebf92eab4769ebec259c2afb0515c0d411d3ce9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cb844f4c4-j5lmn" Sep 13 00:17:30.961565 
kubelet[2747]: E0913 00:17:30.960552 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cb844f4c4-j5lmn_calico-system(10f255ff-eb3e-4053-b0b8-b528af157243)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cb844f4c4-j5lmn_calico-system(10f255ff-eb3e-4053-b0b8-b528af157243)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4aaa733db87befef54128caecebf92eab4769ebec259c2afb0515c0d411d3ce9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cb844f4c4-j5lmn" podUID="10f255ff-eb3e-4053-b0b8-b528af157243" Sep 13 00:17:30.961709 kubelet[2747]: E0913 00:17:30.960463 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71e8e1b1850a0c5df645750d2ae1d505d770f8dfb9f6581d89a422d274aa4cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5nzdh" Sep 13 00:17:30.961849 kubelet[2747]: E0913 00:17:30.961736 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5nzdh_kube-system(fc8c958f-db82-4584-b330-b71768df1e39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5nzdh_kube-system(fc8c958f-db82-4584-b330-b71768df1e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c71e8e1b1850a0c5df645750d2ae1d505d770f8dfb9f6581d89a422d274aa4cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-5nzdh" podUID="fc8c958f-db82-4584-b330-b71768df1e39" Sep 13 00:17:31.371050 containerd[1557]: time="2025-09-13T00:17:31.370989244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:17:32.126005 systemd[1]: Created slice kubepods-besteffort-pod9e101dcf_ec95_4d86_b186_db61bd6330ed.slice - libcontainer container kubepods-besteffort-pod9e101dcf_ec95_4d86_b186_db61bd6330ed.slice. Sep 13 00:17:32.129149 containerd[1557]: time="2025-09-13T00:17:32.129098625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxjn8,Uid:9e101dcf-ec95-4d86-b186-db61bd6330ed,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:32.190676 containerd[1557]: time="2025-09-13T00:17:32.190606949Z" level=error msg="Failed to destroy network for sandbox \"738b101090677bdbddab3264f0d5647c432751edb27023cc1c97aff8dd812e5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:32.192461 containerd[1557]: time="2025-09-13T00:17:32.192378354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxjn8,Uid:9e101dcf-ec95-4d86-b186-db61bd6330ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"738b101090677bdbddab3264f0d5647c432751edb27023cc1c97aff8dd812e5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:32.192780 kubelet[2747]: E0913 00:17:32.192719 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738b101090677bdbddab3264f0d5647c432751edb27023cc1c97aff8dd812e5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:32.193177 kubelet[2747]: E0913 00:17:32.192814 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738b101090677bdbddab3264f0d5647c432751edb27023cc1c97aff8dd812e5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wxjn8" Sep 13 00:17:32.193177 kubelet[2747]: E0913 00:17:32.192844 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738b101090677bdbddab3264f0d5647c432751edb27023cc1c97aff8dd812e5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wxjn8" Sep 13 00:17:32.193177 kubelet[2747]: E0913 00:17:32.192946 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wxjn8_calico-system(9e101dcf-ec95-4d86-b186-db61bd6330ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wxjn8_calico-system(9e101dcf-ec95-4d86-b186-db61bd6330ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"738b101090677bdbddab3264f0d5647c432751edb27023cc1c97aff8dd812e5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wxjn8" podUID="9e101dcf-ec95-4d86-b186-db61bd6330ed" Sep 13 00:17:32.193418 systemd[1]: run-netns-cni\x2d20efb578\x2d4a1a\x2d992b\x2d67e6\x2df210359ecb5b.mount: Deactivated 
successfully. Sep 13 00:17:41.485254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377399003.mount: Deactivated successfully. Sep 13 00:17:42.120943 containerd[1557]: time="2025-09-13T00:17:42.120856444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-xnwhr,Uid:db42c4bd-fd5b-4a58-a0f9-c81216fe7c89,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:17:42.121668 kubelet[2747]: E0913 00:17:42.121047 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:42.122128 containerd[1557]: time="2025-09-13T00:17:42.121688597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nzdh,Uid:fc8c958f-db82-4584-b330-b71768df1e39,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:42.432538 containerd[1557]: time="2025-09-13T00:17:42.432328760Z" level=error msg="Failed to destroy network for sandbox \"e6cc374bf60271d154ef7186098e8887358dcf3111a75a4e5914efeeddf2ee67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:42.437794 systemd[1]: run-netns-cni\x2d1731d291\x2d2e53\x2d2958\x2d73c6\x2d0d899051973e.mount: Deactivated successfully. Sep 13 00:17:42.489350 containerd[1557]: time="2025-09-13T00:17:42.489264080Z" level=error msg="Failed to destroy network for sandbox \"299a1dc5580b541e65811ef1607bbdfa0ff60c64a3f3c5be020eea4a35622a0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:42.492151 systemd[1]: run-netns-cni\x2d79752824\x2d4fca\x2dfbf6\x2d08d9\x2dbf5bb43c6c1e.mount: Deactivated successfully. 
Sep 13 00:17:42.541967 containerd[1557]: time="2025-09-13T00:17:42.541859169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:42.657169 containerd[1557]: time="2025-09-13T00:17:42.656856888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-xnwhr,Uid:db42c4bd-fd5b-4a58-a0f9-c81216fe7c89,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6cc374bf60271d154ef7186098e8887358dcf3111a75a4e5914efeeddf2ee67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:42.657389 kubelet[2747]: E0913 00:17:42.657319 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6cc374bf60271d154ef7186098e8887358dcf3111a75a4e5914efeeddf2ee67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:42.657444 kubelet[2747]: E0913 00:17:42.657396 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6cc374bf60271d154ef7186098e8887358dcf3111a75a4e5914efeeddf2ee67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568d757654-xnwhr" Sep 13 00:17:42.657444 kubelet[2747]: E0913 00:17:42.657421 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e6cc374bf60271d154ef7186098e8887358dcf3111a75a4e5914efeeddf2ee67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568d757654-xnwhr" Sep 13 00:17:42.657531 kubelet[2747]: E0913 00:17:42.657469 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568d757654-xnwhr_calico-apiserver(db42c4bd-fd5b-4a58-a0f9-c81216fe7c89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568d757654-xnwhr_calico-apiserver(db42c4bd-fd5b-4a58-a0f9-c81216fe7c89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6cc374bf60271d154ef7186098e8887358dcf3111a75a4e5914efeeddf2ee67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568d757654-xnwhr" podUID="db42c4bd-fd5b-4a58-a0f9-c81216fe7c89" Sep 13 00:17:42.766068 containerd[1557]: time="2025-09-13T00:17:42.765966475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nzdh,Uid:fc8c958f-db82-4584-b330-b71768df1e39,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"299a1dc5580b541e65811ef1607bbdfa0ff60c64a3f3c5be020eea4a35622a0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:42.766350 kubelet[2747]: E0913 00:17:42.766292 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299a1dc5580b541e65811ef1607bbdfa0ff60c64a3f3c5be020eea4a35622a0f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:42.766432 kubelet[2747]: E0913 00:17:42.766372 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299a1dc5580b541e65811ef1607bbdfa0ff60c64a3f3c5be020eea4a35622a0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5nzdh" Sep 13 00:17:42.766432 kubelet[2747]: E0913 00:17:42.766395 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299a1dc5580b541e65811ef1607bbdfa0ff60c64a3f3c5be020eea4a35622a0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5nzdh" Sep 13 00:17:42.766506 kubelet[2747]: E0913 00:17:42.766444 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5nzdh_kube-system(fc8c958f-db82-4584-b330-b71768df1e39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5nzdh_kube-system(fc8c958f-db82-4584-b330-b71768df1e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"299a1dc5580b541e65811ef1607bbdfa0ff60c64a3f3c5be020eea4a35622a0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5nzdh" podUID="fc8c958f-db82-4584-b330-b71768df1e39" Sep 13 00:17:42.809291 containerd[1557]: 
time="2025-09-13T00:17:42.809220259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:17:42.941297 containerd[1557]: time="2025-09-13T00:17:42.941225703Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:43.037658 containerd[1557]: time="2025-09-13T00:17:43.037476364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:43.038283 containerd[1557]: time="2025-09-13T00:17:43.038252809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 11.667215451s" Sep 13 00:17:43.038448 containerd[1557]: time="2025-09-13T00:17:43.038288758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:17:43.068996 containerd[1557]: time="2025-09-13T00:17:43.068942895Z" level=info msg="CreateContainer within sandbox \"b1b1a0eae98d1043eabc53acca7f84cbb1026b7cd0dfabd38e44ebdc7d523bd8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:17:43.119509 containerd[1557]: time="2025-09-13T00:17:43.119455888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-spds8,Uid:9249a12e-d069-4512-b81e-37ec9c950ff4,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:17:43.633705 containerd[1557]: time="2025-09-13T00:17:43.633622888Z" level=error msg="Failed to 
destroy network for sandbox \"335f1ecfb0798d23efe94e4acb41e9548d89a720866e33d2b22d5e7e923edb90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:43.636188 systemd[1]: run-netns-cni\x2da30c3bd7\x2d027c\x2dd27e\x2da949\x2dfc7280be4045.mount: Deactivated successfully. Sep 13 00:17:43.885672 containerd[1557]: time="2025-09-13T00:17:43.884891258Z" level=info msg="Container 2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:17:44.045995 containerd[1557]: time="2025-09-13T00:17:44.045886150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-spds8,Uid:9249a12e-d069-4512-b81e-37ec9c950ff4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"335f1ecfb0798d23efe94e4acb41e9548d89a720866e33d2b22d5e7e923edb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:44.046356 kubelet[2747]: E0913 00:17:44.046227 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"335f1ecfb0798d23efe94e4acb41e9548d89a720866e33d2b22d5e7e923edb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:44.046356 kubelet[2747]: E0913 00:17:44.046302 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"335f1ecfb0798d23efe94e4acb41e9548d89a720866e33d2b22d5e7e923edb90\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568d757654-spds8" Sep 13 00:17:44.046356 kubelet[2747]: E0913 00:17:44.046326 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"335f1ecfb0798d23efe94e4acb41e9548d89a720866e33d2b22d5e7e923edb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568d757654-spds8" Sep 13 00:17:44.046770 kubelet[2747]: E0913 00:17:44.046376 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568d757654-spds8_calico-apiserver(9249a12e-d069-4512-b81e-37ec9c950ff4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568d757654-spds8_calico-apiserver(9249a12e-d069-4512-b81e-37ec9c950ff4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"335f1ecfb0798d23efe94e4acb41e9548d89a720866e33d2b22d5e7e923edb90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568d757654-spds8" podUID="9249a12e-d069-4512-b81e-37ec9c950ff4" Sep 13 00:17:44.118654 kubelet[2747]: E0913 00:17:44.118584 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:44.119299 containerd[1557]: time="2025-09-13T00:17:44.119233597Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-hnfxw,Uid:bacbf801-d4d8-426e-9e88-33f253ebce09,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:45.122934 containerd[1557]: time="2025-09-13T00:17:45.121139741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cb844f4c4-j5lmn,Uid:10f255ff-eb3e-4053-b0b8-b528af157243,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:45.123401 containerd[1557]: time="2025-09-13T00:17:45.121139771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x8s5r,Uid:6e6da8c5-880e-4731-8be3-963452525c85,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:45.123401 containerd[1557]: time="2025-09-13T00:17:45.123119873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86779ddf76-kmlm2,Uid:e40e6753-fead-4584-935f-903769032081,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:45.261774 containerd[1557]: time="2025-09-13T00:17:45.261699013Z" level=error msg="Failed to destroy network for sandbox \"7fd67c9206d348865ff25bea035f02c4169913cca03984a5d4f8e1b976bbc597\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:45.264892 systemd[1]: run-netns-cni\x2d3b81bcbc\x2db71e\x2debe1\x2d10a4\x2dc776772f49f2.mount: Deactivated successfully. 
Sep 13 00:17:45.318770 containerd[1557]: time="2025-09-13T00:17:45.318717626Z" level=info msg="CreateContainer within sandbox \"b1b1a0eae98d1043eabc53acca7f84cbb1026b7cd0dfabd38e44ebdc7d523bd8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd\"" Sep 13 00:17:45.322814 containerd[1557]: time="2025-09-13T00:17:45.322758025Z" level=info msg="StartContainer for \"2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd\"" Sep 13 00:17:45.324796 containerd[1557]: time="2025-09-13T00:17:45.324762313Z" level=info msg="connecting to shim 2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd" address="unix:///run/containerd/s/a4b4f10fe3c3d3a7f1fff558daeb52bc8ab9b10b22c2713f8e903cb6a5ccf2d4" protocol=ttrpc version=3 Sep 13 00:17:45.358147 systemd[1]: Started cri-containerd-2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd.scope - libcontainer container 2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd. Sep 13 00:17:45.810542 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:17:45.811736 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 00:17:45.890850 containerd[1557]: time="2025-09-13T00:17:45.890796212Z" level=info msg="StartContainer for \"2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd\" returns successfully" Sep 13 00:17:46.121023 containerd[1557]: time="2025-09-13T00:17:46.120117459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxjn8,Uid:9e101dcf-ec95-4d86-b186-db61bd6330ed,Namespace:calico-system,Attempt:0,}" Sep 13 00:17:47.414752 containerd[1557]: time="2025-09-13T00:17:47.414577141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd\" id:\"2bc0abc6b1251b30521739c988a7b13334eb35b6b52ae753b2c77b577f77e89c\" pid:4035 exit_status:1 exited_at:{seconds:1757722667 nanos:413736826}" Sep 13 00:17:47.772469 containerd[1557]: time="2025-09-13T00:17:47.772372713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnfxw,Uid:bacbf801-d4d8-426e-9e88-33f253ebce09,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd67c9206d348865ff25bea035f02c4169913cca03984a5d4f8e1b976bbc597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.773154 kubelet[2747]: E0913 00:17:47.772761 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd67c9206d348865ff25bea035f02c4169913cca03984a5d4f8e1b976bbc597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:17:47.773154 kubelet[2747]: E0913 00:17:47.772846 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"7fd67c9206d348865ff25bea035f02c4169913cca03984a5d4f8e1b976bbc597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hnfxw" Sep 13 00:17:47.773154 kubelet[2747]: E0913 00:17:47.772874 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fd67c9206d348865ff25bea035f02c4169913cca03984a5d4f8e1b976bbc597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hnfxw" Sep 13 00:17:47.773620 kubelet[2747]: E0913 00:17:47.772963 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hnfxw_kube-system(bacbf801-d4d8-426e-9e88-33f253ebce09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hnfxw_kube-system(bacbf801-d4d8-426e-9e88-33f253ebce09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fd67c9206d348865ff25bea035f02c4169913cca03984a5d4f8e1b976bbc597\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hnfxw" podUID="bacbf801-d4d8-426e-9e88-33f253ebce09" Sep 13 00:17:47.998395 containerd[1557]: time="2025-09-13T00:17:47.998331549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd\" id:\"8425eda2af9abe3492d0c93955c20d4508f3a3db5ddb698fb0ad927afb544292\" pid:4059 exit_status:1 exited_at:{seconds:1757722667 nanos:997946137}" Sep 13 00:17:48.384863 kubelet[2747]: I0913 
00:17:48.384735 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vmn7t" podStartSLOduration=6.716417019 podStartE2EDuration="28.3817556s" podCreationTimestamp="2025-09-13 00:17:20 +0000 UTC" firstStartedPulling="2025-09-13 00:17:21.373763464 +0000 UTC m=+22.349629983" lastFinishedPulling="2025-09-13 00:17:43.039102045 +0000 UTC m=+44.014968564" observedRunningTime="2025-09-13 00:17:48.378409272 +0000 UTC m=+49.354275811" watchObservedRunningTime="2025-09-13 00:17:48.3817556 +0000 UTC m=+49.357622119" Sep 13 00:17:48.674155 systemd-networkd[1459]: cali215e0c78b82: Link UP Sep 13 00:17:48.674416 systemd-networkd[1459]: cali215e0c78b82: Gained carrier Sep 13 00:17:48.688494 containerd[1557]: 2025-09-13 00:17:48.445 [INFO][4125] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:17:48.688494 containerd[1557]: 2025-09-13 00:17:48.477 [INFO][4125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wxjn8-eth0 csi-node-driver- calico-system 9e101dcf-ec95-4d86-b186-db61bd6330ed 747 0 2025-09-13 00:17:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wxjn8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali215e0c78b82 [] [] }} ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" Namespace="calico-system" Pod="csi-node-driver-wxjn8" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxjn8-" Sep 13 00:17:48.688494 containerd[1557]: 2025-09-13 00:17:48.477 [INFO][4125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" Namespace="calico-system" Pod="csi-node-driver-wxjn8" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxjn8-eth0" Sep 13 00:17:48.688494 containerd[1557]: 2025-09-13 00:17:48.566 [INFO][4156] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" HandleID="k8s-pod-network.f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" Workload="localhost-k8s-csi--node--driver--wxjn8-eth0" Sep 13 00:17:48.689006 containerd[1557]: 2025-09-13 00:17:48.568 [INFO][4156] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" HandleID="k8s-pod-network.f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" Workload="localhost-k8s-csi--node--driver--wxjn8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005854d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wxjn8", "timestamp":"2025-09-13 00:17:48.566480963 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:48.689006 containerd[1557]: 2025-09-13 00:17:48.568 [INFO][4156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:48.689006 containerd[1557]: 2025-09-13 00:17:48.569 [INFO][4156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:48.689006 containerd[1557]: 2025-09-13 00:17:48.570 [INFO][4156] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:48.689006 containerd[1557]: 2025-09-13 00:17:48.590 [INFO][4156] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" host="localhost" Sep 13 00:17:48.689006 containerd[1557]: 2025-09-13 00:17:48.631 [INFO][4156] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:48.689006 containerd[1557]: 2025-09-13 00:17:48.638 [INFO][4156] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:48.689006 containerd[1557]: 2025-09-13 00:17:48.640 [INFO][4156] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:48.689006 containerd[1557]: 2025-09-13 00:17:48.644 [INFO][4156] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:48.689006 containerd[1557]: 2025-09-13 00:17:48.644 [INFO][4156] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" host="localhost" Sep 13 00:17:48.689265 containerd[1557]: 2025-09-13 00:17:48.646 [INFO][4156] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f Sep 13 00:17:48.689265 containerd[1557]: 2025-09-13 00:17:48.651 [INFO][4156] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" host="localhost" Sep 13 00:17:48.689265 containerd[1557]: 2025-09-13 00:17:48.657 [INFO][4156] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" host="localhost" Sep 13 00:17:48.689265 containerd[1557]: 2025-09-13 00:17:48.657 [INFO][4156] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" host="localhost" Sep 13 00:17:48.689265 containerd[1557]: 2025-09-13 00:17:48.657 [INFO][4156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:48.689265 containerd[1557]: 2025-09-13 00:17:48.658 [INFO][4156] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" HandleID="k8s-pod-network.f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" Workload="localhost-k8s-csi--node--driver--wxjn8-eth0" Sep 13 00:17:48.689414 containerd[1557]: 2025-09-13 00:17:48.661 [INFO][4125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" Namespace="calico-system" Pod="csi-node-driver-wxjn8" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxjn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wxjn8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e101dcf-ec95-4d86-b186-db61bd6330ed", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wxjn8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali215e0c78b82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:48.689478 containerd[1557]: 2025-09-13 00:17:48.662 [INFO][4125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" Namespace="calico-system" Pod="csi-node-driver-wxjn8" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxjn8-eth0" Sep 13 00:17:48.689478 containerd[1557]: 2025-09-13 00:17:48.662 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali215e0c78b82 ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" Namespace="calico-system" Pod="csi-node-driver-wxjn8" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxjn8-eth0" Sep 13 00:17:48.689478 containerd[1557]: 2025-09-13 00:17:48.674 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" Namespace="calico-system" Pod="csi-node-driver-wxjn8" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxjn8-eth0" Sep 13 00:17:48.689570 containerd[1557]: 2025-09-13 00:17:48.675 [INFO][4125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" 
Namespace="calico-system" Pod="csi-node-driver-wxjn8" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxjn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wxjn8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e101dcf-ec95-4d86-b186-db61bd6330ed", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f", Pod:"csi-node-driver-wxjn8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali215e0c78b82", MAC:"fa:e3:df:87:71:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:48.689633 containerd[1557]: 2025-09-13 00:17:48.684 [INFO][4125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" Namespace="calico-system" Pod="csi-node-driver-wxjn8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wxjn8-eth0" Sep 13 00:17:48.766637 systemd-networkd[1459]: cali3ab8809dc4c: Link UP Sep 13 00:17:48.768270 systemd-networkd[1459]: cali3ab8809dc4c: Gained carrier Sep 13 00:17:48.787261 containerd[1557]: 2025-09-13 00:17:48.401 [INFO][4108] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:17:48.787261 containerd[1557]: 2025-09-13 00:17:48.434 [INFO][4108] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0 calico-kube-controllers-86779ddf76- calico-system e40e6753-fead-4584-935f-903769032081 870 0 2025-09-13 00:17:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86779ddf76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86779ddf76-kmlm2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3ab8809dc4c [] [] }} ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Namespace="calico-system" Pod="calico-kube-controllers-86779ddf76-kmlm2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-" Sep 13 00:17:48.787261 containerd[1557]: 2025-09-13 00:17:48.434 [INFO][4108] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Namespace="calico-system" Pod="calico-kube-controllers-86779ddf76-kmlm2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0" Sep 13 00:17:48.787261 containerd[1557]: 2025-09-13 00:17:48.566 [INFO][4140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" 
HandleID="k8s-pod-network.3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Workload="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0" Sep 13 00:17:48.787501 containerd[1557]: 2025-09-13 00:17:48.569 [INFO][4140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" HandleID="k8s-pod-network.3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Workload="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011f650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86779ddf76-kmlm2", "timestamp":"2025-09-13 00:17:48.566556064 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:48.787501 containerd[1557]: 2025-09-13 00:17:48.569 [INFO][4140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:48.787501 containerd[1557]: 2025-09-13 00:17:48.658 [INFO][4140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:48.787501 containerd[1557]: 2025-09-13 00:17:48.658 [INFO][4140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:48.787501 containerd[1557]: 2025-09-13 00:17:48.693 [INFO][4140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" host="localhost" Sep 13 00:17:48.787501 containerd[1557]: 2025-09-13 00:17:48.731 [INFO][4140] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:48.787501 containerd[1557]: 2025-09-13 00:17:48.738 [INFO][4140] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:48.787501 containerd[1557]: 2025-09-13 00:17:48.740 [INFO][4140] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:48.787501 containerd[1557]: 2025-09-13 00:17:48.743 [INFO][4140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:48.787501 containerd[1557]: 2025-09-13 00:17:48.743 [INFO][4140] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" host="localhost" Sep 13 00:17:48.787990 containerd[1557]: 2025-09-13 00:17:48.745 [INFO][4140] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c Sep 13 00:17:48.787990 containerd[1557]: 2025-09-13 00:17:48.749 [INFO][4140] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" host="localhost" Sep 13 00:17:48.787990 containerd[1557]: 2025-09-13 00:17:48.756 [INFO][4140] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" host="localhost" Sep 13 00:17:48.787990 containerd[1557]: 2025-09-13 00:17:48.756 [INFO][4140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" host="localhost" Sep 13 00:17:48.787990 containerd[1557]: 2025-09-13 00:17:48.758 [INFO][4140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:48.787990 containerd[1557]: 2025-09-13 00:17:48.758 [INFO][4140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" HandleID="k8s-pod-network.3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Workload="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0" Sep 13 00:17:48.788331 containerd[1557]: 2025-09-13 00:17:48.763 [INFO][4108] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Namespace="calico-system" Pod="calico-kube-controllers-86779ddf76-kmlm2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0", GenerateName:"calico-kube-controllers-86779ddf76-", Namespace:"calico-system", SelfLink:"", UID:"e40e6753-fead-4584-935f-903769032081", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86779ddf76", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86779ddf76-kmlm2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ab8809dc4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:48.788442 containerd[1557]: 2025-09-13 00:17:48.763 [INFO][4108] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Namespace="calico-system" Pod="calico-kube-controllers-86779ddf76-kmlm2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0" Sep 13 00:17:48.788442 containerd[1557]: 2025-09-13 00:17:48.763 [INFO][4108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ab8809dc4c ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Namespace="calico-system" Pod="calico-kube-controllers-86779ddf76-kmlm2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0" Sep 13 00:17:48.788442 containerd[1557]: 2025-09-13 00:17:48.769 [INFO][4108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Namespace="calico-system" Pod="calico-kube-controllers-86779ddf76-kmlm2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0" Sep 13 00:17:48.788612 containerd[1557]: 
2025-09-13 00:17:48.769 [INFO][4108] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Namespace="calico-system" Pod="calico-kube-controllers-86779ddf76-kmlm2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0", GenerateName:"calico-kube-controllers-86779ddf76-", Namespace:"calico-system", SelfLink:"", UID:"e40e6753-fead-4584-935f-903769032081", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86779ddf76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c", Pod:"calico-kube-controllers-86779ddf76-kmlm2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ab8809dc4c", MAC:"d2:62:b6:10:06:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:48.788739 containerd[1557]: 
2025-09-13 00:17:48.781 [INFO][4108] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" Namespace="calico-system" Pod="calico-kube-controllers-86779ddf76-kmlm2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86779ddf76--kmlm2-eth0" Sep 13 00:17:48.828586 containerd[1557]: time="2025-09-13T00:17:48.828484904Z" level=info msg="connecting to shim f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f" address="unix:///run/containerd/s/a2ebcdd955134c1fed9b7fca06fe4fac261fc8717c45af2df82a7bd7bad1a221" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:48.862048 systemd[1]: Started cri-containerd-f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f.scope - libcontainer container f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f. Sep 13 00:17:48.876972 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:48.918498 systemd-networkd[1459]: cali33a119e4e64: Link UP Sep 13 00:17:48.918815 systemd-networkd[1459]: cali33a119e4e64: Gained carrier Sep 13 00:17:49.085781 containerd[1557]: time="2025-09-13T00:17:49.085698014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxjn8,Uid:9e101dcf-ec95-4d86-b186-db61bd6330ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f\"" Sep 13 00:17:49.094735 containerd[1557]: time="2025-09-13T00:17:49.091903134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:17:49.165120 containerd[1557]: 2025-09-13 00:17:48.112 [INFO][4077] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:17:49.165120 containerd[1557]: 2025-09-13 00:17:48.419 [INFO][4077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-goldmane--54d579b49d--x8s5r-eth0 goldmane-54d579b49d- calico-system 6e6da8c5-880e-4731-8be3-963452525c85 868 0 2025-09-13 00:17:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-x8s5r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali33a119e4e64 [] [] }} ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Namespace="calico-system" Pod="goldmane-54d579b49d-x8s5r" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x8s5r-" Sep 13 00:17:49.165120 containerd[1557]: 2025-09-13 00:17:48.419 [INFO][4077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Namespace="calico-system" Pod="goldmane-54d579b49d-x8s5r" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x8s5r-eth0" Sep 13 00:17:49.165120 containerd[1557]: 2025-09-13 00:17:48.566 [INFO][4138] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" HandleID="k8s-pod-network.42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Workload="localhost-k8s-goldmane--54d579b49d--x8s5r-eth0" Sep 13 00:17:49.165680 containerd[1557]: 2025-09-13 00:17:48.568 [INFO][4138] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" HandleID="k8s-pod-network.42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Workload="localhost-k8s-goldmane--54d579b49d--x8s5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-x8s5r", "timestamp":"2025-09-13 
00:17:48.566270248 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:49.165680 containerd[1557]: 2025-09-13 00:17:48.568 [INFO][4138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:49.165680 containerd[1557]: 2025-09-13 00:17:48.756 [INFO][4138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:49.165680 containerd[1557]: 2025-09-13 00:17:48.756 [INFO][4138] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:49.165680 containerd[1557]: 2025-09-13 00:17:48.797 [INFO][4138] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" host="localhost" Sep 13 00:17:49.165680 containerd[1557]: 2025-09-13 00:17:48.832 [INFO][4138] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:49.165680 containerd[1557]: 2025-09-13 00:17:48.839 [INFO][4138] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:49.165680 containerd[1557]: 2025-09-13 00:17:48.841 [INFO][4138] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:49.165680 containerd[1557]: 2025-09-13 00:17:48.843 [INFO][4138] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:49.165680 containerd[1557]: 2025-09-13 00:17:48.843 [INFO][4138] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" host="localhost" Sep 13 00:17:49.166171 containerd[1557]: 2025-09-13 00:17:48.845 [INFO][4138] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b Sep 13 00:17:49.166171 containerd[1557]: 2025-09-13 00:17:48.903 [INFO][4138] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" host="localhost" Sep 13 00:17:49.166171 containerd[1557]: 2025-09-13 00:17:48.911 [INFO][4138] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" host="localhost" Sep 13 00:17:49.166171 containerd[1557]: 2025-09-13 00:17:48.911 [INFO][4138] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" host="localhost" Sep 13 00:17:49.166171 containerd[1557]: 2025-09-13 00:17:48.911 [INFO][4138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:17:49.166171 containerd[1557]: 2025-09-13 00:17:48.911 [INFO][4138] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" HandleID="k8s-pod-network.42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Workload="localhost-k8s-goldmane--54d579b49d--x8s5r-eth0" Sep 13 00:17:49.166326 containerd[1557]: 2025-09-13 00:17:48.915 [INFO][4077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Namespace="calico-system" Pod="goldmane-54d579b49d-x8s5r" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x8s5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--x8s5r-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6e6da8c5-880e-4731-8be3-963452525c85", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-x8s5r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33a119e4e64", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:49.166326 containerd[1557]: 2025-09-13 00:17:48.915 [INFO][4077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Namespace="calico-system" Pod="goldmane-54d579b49d-x8s5r" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x8s5r-eth0" Sep 13 00:17:49.166529 containerd[1557]: 2025-09-13 00:17:48.915 [INFO][4077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33a119e4e64 ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Namespace="calico-system" Pod="goldmane-54d579b49d-x8s5r" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x8s5r-eth0" Sep 13 00:17:49.166529 containerd[1557]: 2025-09-13 00:17:48.918 [INFO][4077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Namespace="calico-system" Pod="goldmane-54d579b49d-x8s5r" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x8s5r-eth0" Sep 13 00:17:49.166597 containerd[1557]: 2025-09-13 00:17:48.918 [INFO][4077] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Namespace="calico-system" Pod="goldmane-54d579b49d-x8s5r" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x8s5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--x8s5r-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6e6da8c5-880e-4731-8be3-963452525c85", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 20, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b", Pod:"goldmane-54d579b49d-x8s5r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33a119e4e64", MAC:"0a:12:e4:b6:c4:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:49.166667 containerd[1557]: 2025-09-13 00:17:49.160 [INFO][4077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" Namespace="calico-system" Pod="goldmane-54d579b49d-x8s5r" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x8s5r-eth0" Sep 13 00:17:49.655288 systemd[1]: Started sshd@9-10.0.0.20:22-10.0.0.1:60332.service - OpenSSH per-connection server daemon (10.0.0.1:60332). Sep 13 00:17:49.757872 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 60332 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:17:49.759565 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:49.768138 systemd-logind[1543]: New session 10 of user core. Sep 13 00:17:49.776186 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 13 00:17:49.947121 systemd-networkd[1459]: cali3ab8809dc4c: Gained IPv6LL Sep 13 00:17:50.062988 systemd-networkd[1459]: cali8a6915932d0: Link UP Sep 13 00:17:50.064484 systemd-networkd[1459]: cali8a6915932d0: Gained carrier Sep 13 00:17:50.261195 containerd[1557]: 2025-09-13 00:17:48.109 [INFO][4071] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:17:50.261195 containerd[1557]: 2025-09-13 00:17:48.420 [INFO][4071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0 whisker-5cb844f4c4- calico-system 10f255ff-eb3e-4053-b0b8-b528af157243 941 0 2025-09-13 00:17:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5cb844f4c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5cb844f4c4-j5lmn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8a6915932d0 [] [] }} ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Namespace="calico-system" Pod="whisker-5cb844f4c4-j5lmn" WorkloadEndpoint="localhost-k8s-whisker--5cb844f4c4--j5lmn-" Sep 13 00:17:50.261195 containerd[1557]: 2025-09-13 00:17:48.420 [INFO][4071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Namespace="calico-system" Pod="whisker-5cb844f4c4-j5lmn" WorkloadEndpoint="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:17:50.261195 containerd[1557]: 2025-09-13 00:17:48.566 [INFO][4142] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:17:50.261955 
containerd[1557]: 2025-09-13 00:17:48.568 [INFO][4142] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5cb844f4c4-j5lmn", "timestamp":"2025-09-13 00:17:48.565954865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:50.261955 containerd[1557]: 2025-09-13 00:17:48.574 [INFO][4142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:50.261955 containerd[1557]: 2025-09-13 00:17:48.911 [INFO][4142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:50.261955 containerd[1557]: 2025-09-13 00:17:48.911 [INFO][4142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:50.261955 containerd[1557]: 2025-09-13 00:17:48.925 [INFO][4142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" host="localhost" Sep 13 00:17:50.261955 containerd[1557]: 2025-09-13 00:17:49.167 [INFO][4142] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:50.261955 containerd[1557]: 2025-09-13 00:17:49.601 [INFO][4142] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:50.261955 containerd[1557]: 2025-09-13 00:17:49.603 [INFO][4142] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:50.261955 containerd[1557]: 2025-09-13 00:17:49.634 [INFO][4142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:50.261955 containerd[1557]: 2025-09-13 00:17:49.634 [INFO][4142] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" host="localhost" Sep 13 00:17:50.262237 containerd[1557]: 2025-09-13 00:17:49.635 [INFO][4142] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719 Sep 13 00:17:50.262237 containerd[1557]: 2025-09-13 00:17:49.780 [INFO][4142] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" host="localhost" Sep 13 00:17:50.262237 containerd[1557]: 2025-09-13 00:17:50.049 [INFO][4142] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" host="localhost" Sep 13 00:17:50.262237 containerd[1557]: 2025-09-13 00:17:50.049 [INFO][4142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" host="localhost" Sep 13 00:17:50.262237 containerd[1557]: 2025-09-13 00:17:50.049 [INFO][4142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:50.262237 containerd[1557]: 2025-09-13 00:17:50.049 [INFO][4142] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:17:50.264107 containerd[1557]: 2025-09-13 00:17:50.056 [INFO][4071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Namespace="calico-system" Pod="whisker-5cb844f4c4-j5lmn" WorkloadEndpoint="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0", GenerateName:"whisker-5cb844f4c4-", Namespace:"calico-system", SelfLink:"", UID:"10f255ff-eb3e-4053-b0b8-b528af157243", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cb844f4c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5cb844f4c4-j5lmn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8a6915932d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:50.264107 containerd[1557]: 2025-09-13 00:17:50.057 [INFO][4071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Namespace="calico-system" Pod="whisker-5cb844f4c4-j5lmn" WorkloadEndpoint="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:17:50.264206 containerd[1557]: 2025-09-13 00:17:50.057 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a6915932d0 ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Namespace="calico-system" Pod="whisker-5cb844f4c4-j5lmn" WorkloadEndpoint="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:17:50.264206 containerd[1557]: 2025-09-13 00:17:50.065 [INFO][4071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Namespace="calico-system" Pod="whisker-5cb844f4c4-j5lmn" WorkloadEndpoint="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:17:50.264259 containerd[1557]: 2025-09-13 00:17:50.066 [INFO][4071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Namespace="calico-system" Pod="whisker-5cb844f4c4-j5lmn" 
WorkloadEndpoint="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0", GenerateName:"whisker-5cb844f4c4-", Namespace:"calico-system", SelfLink:"", UID:"10f255ff-eb3e-4053-b0b8-b528af157243", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cb844f4c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719", Pod:"whisker-5cb844f4c4-j5lmn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8a6915932d0", MAC:"8e:e2:b1:bf:c0:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:50.264319 containerd[1557]: 2025-09-13 00:17:50.251 [INFO][4071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Namespace="calico-system" Pod="whisker-5cb844f4c4-j5lmn" WorkloadEndpoint="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:17:50.267129 systemd-networkd[1459]: cali215e0c78b82: Gained IPv6LL Sep 13 00:17:50.295775 sshd[4348]: Connection 
closed by 10.0.0.1 port 60332 Sep 13 00:17:50.296724 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:50.306615 systemd[1]: sshd@9-10.0.0.20:22-10.0.0.1:60332.service: Deactivated successfully. Sep 13 00:17:50.308110 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:17:50.312288 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:17:50.335134 containerd[1557]: time="2025-09-13T00:17:50.335035211Z" level=info msg="connecting to shim 3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c" address="unix:///run/containerd/s/0db2b2c81f464c44699c06badc16acf69704cedccbfe5f36d00efc0b1b66c8cc" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:50.341350 containerd[1557]: time="2025-09-13T00:17:50.340488566Z" level=info msg="connecting to shim 42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b" address="unix:///run/containerd/s/7ce41a32dd21c5d5fe9bdcedabb1ae2d8eb6db5a88c957badbe4930659418190" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:50.344844 systemd-logind[1543]: Removed session 10. Sep 13 00:17:50.402205 containerd[1557]: time="2025-09-13T00:17:50.401064375Z" level=info msg="connecting to shim 75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" address="unix:///run/containerd/s/2b874cbadf84c92b061816214f3970e6f67166576bab1c4c3fcc812616c9bbd6" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:50.441122 systemd[1]: Started cri-containerd-3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c.scope - libcontainer container 3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c. Sep 13 00:17:50.443942 systemd[1]: Started cri-containerd-42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b.scope - libcontainer container 42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b. 
Sep 13 00:17:50.449017 systemd[1]: Started cri-containerd-75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719.scope - libcontainer container 75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719. Sep 13 00:17:50.469750 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:50.474213 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:50.475873 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:50.532593 containerd[1557]: time="2025-09-13T00:17:50.532102286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86779ddf76-kmlm2,Uid:e40e6753-fead-4584-935f-903769032081,Namespace:calico-system,Attempt:0,} returns sandbox id \"3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c\"" Sep 13 00:17:50.534851 containerd[1557]: time="2025-09-13T00:17:50.534822847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cb844f4c4-j5lmn,Uid:10f255ff-eb3e-4053-b0b8-b528af157243,Namespace:calico-system,Attempt:0,} returns sandbox id \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\"" Sep 13 00:17:50.540481 containerd[1557]: time="2025-09-13T00:17:50.540394855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x8s5r,Uid:6e6da8c5-880e-4731-8be3-963452525c85,Namespace:calico-system,Attempt:0,} returns sandbox id \"42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b\"" Sep 13 00:17:50.651075 systemd-networkd[1459]: cali33a119e4e64: Gained IPv6LL Sep 13 00:17:50.692760 systemd-networkd[1459]: vxlan.calico: Link UP Sep 13 00:17:50.692777 systemd-networkd[1459]: vxlan.calico: Gained carrier Sep 13 00:17:51.164464 systemd-networkd[1459]: cali8a6915932d0: Gained IPv6LL Sep 13 00:17:51.351937 containerd[1557]: 
time="2025-09-13T00:17:51.351863718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:51.352785 containerd[1557]: time="2025-09-13T00:17:51.352699650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:17:51.354390 containerd[1557]: time="2025-09-13T00:17:51.354350312Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:51.356365 containerd[1557]: time="2025-09-13T00:17:51.356309435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:51.356855 containerd[1557]: time="2025-09-13T00:17:51.356792573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.264823475s" Sep 13 00:17:51.356855 containerd[1557]: time="2025-09-13T00:17:51.356837337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:17:51.358299 containerd[1557]: time="2025-09-13T00:17:51.358256064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:17:51.359884 containerd[1557]: time="2025-09-13T00:17:51.359829472Z" level=info msg="CreateContainer within sandbox \"f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" 
Sep 13 00:17:52.318076 containerd[1557]: time="2025-09-13T00:17:52.317661654Z" level=info msg="Container 1f8f84be460846a5de2c7000bcb753bb4d5e7833b7c5a5c9c2962ef2c9bab67b: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:17:52.480153 containerd[1557]: time="2025-09-13T00:17:52.479614745Z" level=info msg="CreateContainer within sandbox \"f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1f8f84be460846a5de2c7000bcb753bb4d5e7833b7c5a5c9c2962ef2c9bab67b\"" Sep 13 00:17:52.480755 containerd[1557]: time="2025-09-13T00:17:52.480727378Z" level=info msg="StartContainer for \"1f8f84be460846a5de2c7000bcb753bb4d5e7833b7c5a5c9c2962ef2c9bab67b\"" Sep 13 00:17:52.482968 containerd[1557]: time="2025-09-13T00:17:52.482382381Z" level=info msg="connecting to shim 1f8f84be460846a5de2c7000bcb753bb4d5e7833b7c5a5c9c2962ef2c9bab67b" address="unix:///run/containerd/s/a2ebcdd955134c1fed9b7fca06fe4fac261fc8717c45af2df82a7bd7bad1a221" protocol=ttrpc version=3 Sep 13 00:17:52.579296 systemd[1]: Started cri-containerd-1f8f84be460846a5de2c7000bcb753bb4d5e7833b7c5a5c9c2962ef2c9bab67b.scope - libcontainer container 1f8f84be460846a5de2c7000bcb753bb4d5e7833b7c5a5c9c2962ef2c9bab67b. 
Sep 13 00:17:52.707109 systemd-networkd[1459]: vxlan.calico: Gained IPv6LL Sep 13 00:17:52.836601 containerd[1557]: time="2025-09-13T00:17:52.836448797Z" level=info msg="StartContainer for \"1f8f84be460846a5de2c7000bcb753bb4d5e7833b7c5a5c9c2962ef2c9bab67b\" returns successfully" Sep 13 00:17:54.126534 containerd[1557]: time="2025-09-13T00:17:54.126460929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-xnwhr,Uid:db42c4bd-fd5b-4a58-a0f9-c81216fe7c89,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:17:54.411619 systemd-networkd[1459]: calidaca0fbb849: Link UP Sep 13 00:17:54.416288 systemd-networkd[1459]: calidaca0fbb849: Gained carrier Sep 13 00:17:54.444406 containerd[1557]: 2025-09-13 00:17:54.319 [INFO][4657] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0 calico-apiserver-568d757654- calico-apiserver db42c4bd-fd5b-4a58-a0f9-c81216fe7c89 871 0 2025-09-13 00:17:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:568d757654 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-568d757654-xnwhr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidaca0fbb849 [] [] }} ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-xnwhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--xnwhr-" Sep 13 00:17:54.444406 containerd[1557]: 2025-09-13 00:17:54.319 [INFO][4657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-xnwhr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0" Sep 13 00:17:54.444406 containerd[1557]: 2025-09-13 00:17:54.361 [INFO][4675] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" HandleID="k8s-pod-network.2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Workload="localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0" Sep 13 00:17:54.444969 containerd[1557]: 2025-09-13 00:17:54.361 [INFO][4675] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" HandleID="k8s-pod-network.2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Workload="localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6f60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-568d757654-xnwhr", "timestamp":"2025-09-13 00:17:54.361408084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:54.444969 containerd[1557]: 2025-09-13 00:17:54.361 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:54.444969 containerd[1557]: 2025-09-13 00:17:54.362 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:54.444969 containerd[1557]: 2025-09-13 00:17:54.362 [INFO][4675] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:54.444969 containerd[1557]: 2025-09-13 00:17:54.373 [INFO][4675] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" host="localhost" Sep 13 00:17:54.444969 containerd[1557]: 2025-09-13 00:17:54.378 [INFO][4675] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:54.444969 containerd[1557]: 2025-09-13 00:17:54.383 [INFO][4675] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:54.444969 containerd[1557]: 2025-09-13 00:17:54.385 [INFO][4675] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:54.444969 containerd[1557]: 2025-09-13 00:17:54.387 [INFO][4675] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:54.444969 containerd[1557]: 2025-09-13 00:17:54.387 [INFO][4675] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" host="localhost" Sep 13 00:17:54.445446 containerd[1557]: 2025-09-13 00:17:54.389 [INFO][4675] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e Sep 13 00:17:54.445446 containerd[1557]: 2025-09-13 00:17:54.393 [INFO][4675] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" host="localhost" Sep 13 00:17:54.445446 containerd[1557]: 2025-09-13 00:17:54.402 [INFO][4675] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" host="localhost" Sep 13 00:17:54.445446 containerd[1557]: 2025-09-13 00:17:54.402 [INFO][4675] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" host="localhost" Sep 13 00:17:54.445446 containerd[1557]: 2025-09-13 00:17:54.402 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:54.445446 containerd[1557]: 2025-09-13 00:17:54.402 [INFO][4675] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" HandleID="k8s-pod-network.2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Workload="localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0" Sep 13 00:17:54.445629 containerd[1557]: 2025-09-13 00:17:54.406 [INFO][4657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-xnwhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0", GenerateName:"calico-apiserver-568d757654-", Namespace:"calico-apiserver", SelfLink:"", UID:"db42c4bd-fd5b-4a58-a0f9-c81216fe7c89", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568d757654", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-568d757654-xnwhr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaca0fbb849", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:54.445708 containerd[1557]: 2025-09-13 00:17:54.406 [INFO][4657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-xnwhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0" Sep 13 00:17:54.445708 containerd[1557]: 2025-09-13 00:17:54.406 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaca0fbb849 ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-xnwhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0" Sep 13 00:17:54.445708 containerd[1557]: 2025-09-13 00:17:54.417 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-xnwhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0" Sep 13 00:17:54.445804 containerd[1557]: 2025-09-13 00:17:54.420 [INFO][4657] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-xnwhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0", GenerateName:"calico-apiserver-568d757654-", Namespace:"calico-apiserver", SelfLink:"", UID:"db42c4bd-fd5b-4a58-a0f9-c81216fe7c89", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568d757654", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e", Pod:"calico-apiserver-568d757654-xnwhr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaca0fbb849", MAC:"9a:6a:b5:71:ca:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:54.445881 containerd[1557]: 2025-09-13 00:17:54.434 [INFO][4657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-xnwhr" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--xnwhr-eth0" Sep 13 00:17:54.483455 containerd[1557]: time="2025-09-13T00:17:54.483388112Z" level=info msg="connecting to shim 2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e" address="unix:///run/containerd/s/d29c111b673f1468d7838c88dae1a6f81154efe8bd8cf1db7e732395eb693c1c" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:54.520101 systemd[1]: Started cri-containerd-2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e.scope - libcontainer container 2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e. Sep 13 00:17:54.546484 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:54.591389 containerd[1557]: time="2025-09-13T00:17:54.591333746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-xnwhr,Uid:db42c4bd-fd5b-4a58-a0f9-c81216fe7c89,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e\"" Sep 13 00:17:54.930394 containerd[1557]: time="2025-09-13T00:17:54.930333436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:54.931169 containerd[1557]: time="2025-09-13T00:17:54.931115568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:17:54.932422 containerd[1557]: time="2025-09-13T00:17:54.932364770Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:54.935548 containerd[1557]: 
time="2025-09-13T00:17:54.935517156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:54.936364 containerd[1557]: time="2025-09-13T00:17:54.936332370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.578043786s" Sep 13 00:17:54.936452 containerd[1557]: time="2025-09-13T00:17:54.936369000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:17:54.938011 containerd[1557]: time="2025-09-13T00:17:54.937966237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:17:54.948525 containerd[1557]: time="2025-09-13T00:17:54.948482537Z" level=info msg="CreateContainer within sandbox \"3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:17:55.002307 containerd[1557]: time="2025-09-13T00:17:55.002239625Z" level=info msg="Container 53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:17:55.013217 containerd[1557]: time="2025-09-13T00:17:55.013159916Z" level=info msg="CreateContainer within sandbox \"3078d38a597b3cc19d68921e4df7e633f00a5cdce993104a4a52cbfae808e21c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e\"" Sep 13 00:17:55.013834 
containerd[1557]: time="2025-09-13T00:17:55.013785504Z" level=info msg="StartContainer for \"53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e\"" Sep 13 00:17:55.015171 containerd[1557]: time="2025-09-13T00:17:55.015104639Z" level=info msg="connecting to shim 53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e" address="unix:///run/containerd/s/0db2b2c81f464c44699c06badc16acf69704cedccbfe5f36d00efc0b1b66c8cc" protocol=ttrpc version=3 Sep 13 00:17:55.046072 systemd[1]: Started cri-containerd-53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e.scope - libcontainer container 53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e. Sep 13 00:17:55.156208 containerd[1557]: time="2025-09-13T00:17:55.156141806Z" level=info msg="StartContainer for \"53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e\" returns successfully" Sep 13 00:17:55.319039 systemd[1]: Started sshd@10-10.0.0.20:22-10.0.0.1:60782.service - OpenSSH per-connection server daemon (10.0.0.1:60782). Sep 13 00:17:55.387853 sshd[4788]: Accepted publickey for core from 10.0.0.1 port 60782 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:17:55.389879 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:55.395633 systemd-logind[1543]: New session 11 of user core. Sep 13 00:17:55.405104 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:17:55.552550 sshd[4790]: Connection closed by 10.0.0.1 port 60782 Sep 13 00:17:55.553254 sshd-session[4788]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:55.559307 systemd[1]: sshd@10-10.0.0.20:22-10.0.0.1:60782.service: Deactivated successfully. Sep 13 00:17:55.562441 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:17:55.563808 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:17:55.565371 systemd-logind[1543]: Removed session 11. 
Sep 13 00:17:55.771150 systemd-networkd[1459]: calidaca0fbb849: Gained IPv6LL Sep 13 00:17:56.135661 containerd[1557]: time="2025-09-13T00:17:56.135512023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e\" id:\"ec3beff6cb79da37adffb30ffa1908bd696871e6ec3020498cd768c354a96b76\" pid:4820 exited_at:{seconds:1757722676 nanos:133354087}" Sep 13 00:17:56.231231 kubelet[2747]: I0913 00:17:56.231112 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86779ddf76-kmlm2" podStartSLOduration=30.828687648 podStartE2EDuration="35.231088451s" podCreationTimestamp="2025-09-13 00:17:21 +0000 UTC" firstStartedPulling="2025-09-13 00:17:50.53495234 +0000 UTC m=+51.510818859" lastFinishedPulling="2025-09-13 00:17:54.937353143 +0000 UTC m=+55.913219662" observedRunningTime="2025-09-13 00:17:56.230819594 +0000 UTC m=+57.206686113" watchObservedRunningTime="2025-09-13 00:17:56.231088451 +0000 UTC m=+57.206954970" Sep 13 00:17:57.118227 kubelet[2747]: E0913 00:17:57.118180 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:57.118790 containerd[1557]: time="2025-09-13T00:17:57.118736147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nzdh,Uid:fc8c958f-db82-4584-b330-b71768df1e39,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:57.118790 containerd[1557]: time="2025-09-13T00:17:57.118765061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-spds8,Uid:9249a12e-d069-4512-b81e-37ec9c950ff4,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:17:57.255543 systemd-networkd[1459]: cali7e1f6080c94: Link UP Sep 13 00:17:57.256500 containerd[1557]: time="2025-09-13T00:17:57.256458474Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:57.256741 systemd-networkd[1459]: cali7e1f6080c94: Gained carrier Sep 13 00:17:57.258620 containerd[1557]: time="2025-09-13T00:17:57.258563032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:17:57.260227 containerd[1557]: time="2025-09-13T00:17:57.260195571Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:57.265770 containerd[1557]: time="2025-09-13T00:17:57.265727953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:57.268895 containerd[1557]: time="2025-09-13T00:17:57.268782463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.330759248s" Sep 13 00:17:57.268895 containerd[1557]: time="2025-09-13T00:17:57.268835222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:17:57.269868 containerd[1557]: time="2025-09-13T00:17:57.269733045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:17:57.271052 containerd[1557]: time="2025-09-13T00:17:57.271016064Z" level=info msg="CreateContainer within sandbox \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" for container 
&ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:17:57.273101 containerd[1557]: 2025-09-13 00:17:57.174 [INFO][4836] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0 coredns-668d6bf9bc- kube-system fc8c958f-db82-4584-b330-b71768df1e39 864 0 2025-09-13 00:17:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-5nzdh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7e1f6080c94 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nzdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5nzdh-" Sep 13 00:17:57.273101 containerd[1557]: 2025-09-13 00:17:57.174 [INFO][4836] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nzdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0" Sep 13 00:17:57.273101 containerd[1557]: 2025-09-13 00:17:57.212 [INFO][4865] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" HandleID="k8s-pod-network.da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Workload="localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0" Sep 13 00:17:57.273411 containerd[1557]: 2025-09-13 00:17:57.212 [INFO][4865] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" HandleID="k8s-pod-network.da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Workload="localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-5nzdh", "timestamp":"2025-09-13 00:17:57.212706723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:57.273411 containerd[1557]: 2025-09-13 00:17:57.213 [INFO][4865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:57.273411 containerd[1557]: 2025-09-13 00:17:57.213 [INFO][4865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:57.273411 containerd[1557]: 2025-09-13 00:17:57.213 [INFO][4865] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:57.273411 containerd[1557]: 2025-09-13 00:17:57.222 [INFO][4865] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" host="localhost" Sep 13 00:17:57.273411 containerd[1557]: 2025-09-13 00:17:57.226 [INFO][4865] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:57.273411 containerd[1557]: 2025-09-13 00:17:57.231 [INFO][4865] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:57.273411 containerd[1557]: 2025-09-13 00:17:57.232 [INFO][4865] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:57.273411 containerd[1557]: 2025-09-13 00:17:57.235 [INFO][4865] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:57.273411 containerd[1557]: 2025-09-13 00:17:57.235 [INFO][4865] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" host="localhost" Sep 13 00:17:57.273691 containerd[1557]: 2025-09-13 00:17:57.236 [INFO][4865] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a Sep 13 00:17:57.273691 containerd[1557]: 2025-09-13 00:17:57.240 [INFO][4865] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" host="localhost" Sep 13 00:17:57.273691 containerd[1557]: 2025-09-13 00:17:57.248 [INFO][4865] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" host="localhost" Sep 13 00:17:57.273691 containerd[1557]: 2025-09-13 00:17:57.248 [INFO][4865] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" host="localhost" Sep 13 00:17:57.273691 containerd[1557]: 2025-09-13 00:17:57.248 [INFO][4865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:17:57.273691 containerd[1557]: 2025-09-13 00:17:57.248 [INFO][4865] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" HandleID="k8s-pod-network.da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Workload="localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0" Sep 13 00:17:57.273840 containerd[1557]: 2025-09-13 00:17:57.251 [INFO][4836] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nzdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fc8c958f-db82-4584-b330-b71768df1e39", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-5nzdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e1f6080c94", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:57.273950 containerd[1557]: 2025-09-13 00:17:57.252 [INFO][4836] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nzdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0" Sep 13 00:17:57.273950 containerd[1557]: 2025-09-13 00:17:57.252 [INFO][4836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e1f6080c94 ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nzdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0" Sep 13 00:17:57.273950 containerd[1557]: 2025-09-13 00:17:57.254 [INFO][4836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nzdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0" Sep 13 00:17:57.274036 containerd[1557]: 2025-09-13 00:17:57.255 [INFO][4836] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nzdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fc8c958f-db82-4584-b330-b71768df1e39", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a", Pod:"coredns-668d6bf9bc-5nzdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e1f6080c94", MAC:"ea:ed:f5:89:4f:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:57.274036 containerd[1557]: 2025-09-13 00:17:57.268 [INFO][4836] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nzdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5nzdh-eth0" Sep 13 00:17:57.299430 containerd[1557]: time="2025-09-13T00:17:57.299336878Z" level=info msg="Container 4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:17:57.311793 containerd[1557]: time="2025-09-13T00:17:57.311695912Z" level=info msg="CreateContainer within sandbox \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\"" Sep 13 00:17:57.312826 containerd[1557]: time="2025-09-13T00:17:57.312794013Z" level=info msg="StartContainer for \"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\"" Sep 13 00:17:57.313856 containerd[1557]: time="2025-09-13T00:17:57.313805551Z" level=info msg="connecting to shim 4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3" address="unix:///run/containerd/s/2b874cbadf84c92b061816214f3970e6f67166576bab1c4c3fcc812616c9bbd6" protocol=ttrpc version=3 Sep 13 00:17:57.316410 containerd[1557]: time="2025-09-13T00:17:57.316296418Z" level=info msg="connecting to shim da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a" address="unix:///run/containerd/s/f819a6cb827419a1e39b03995b1f3d73045640a1cd18d01b3c2349e1b4bc44b1" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:57.344058 systemd[1]: Started cri-containerd-4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3.scope - libcontainer container 4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3. Sep 13 00:17:57.353119 systemd[1]: Started cri-containerd-da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a.scope - libcontainer container da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a. 
Sep 13 00:17:57.370488 systemd-networkd[1459]: cali6aed1578a47: Link UP Sep 13 00:17:57.372017 systemd-networkd[1459]: cali6aed1578a47: Gained carrier Sep 13 00:17:57.381930 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.174 [INFO][4849] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--568d757654--spds8-eth0 calico-apiserver-568d757654- calico-apiserver 9249a12e-d069-4512-b81e-37ec9c950ff4 861 0 2025-09-13 00:17:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:568d757654 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-568d757654-spds8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6aed1578a47 [] [] }} ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-spds8" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--spds8-" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.175 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-spds8" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--spds8-eth0" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.215 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" HandleID="k8s-pod-network.ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" 
Workload="localhost-k8s-calico--apiserver--568d757654--spds8-eth0" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.215 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" HandleID="k8s-pod-network.ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" Workload="localhost-k8s-calico--apiserver--568d757654--spds8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a2f30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-568d757654-spds8", "timestamp":"2025-09-13 00:17:57.215137075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.215 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.248 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.248 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.325 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" host="localhost" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.332 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.339 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.342 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.346 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.346 [INFO][4871] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" host="localhost" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.348 [INFO][4871] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.352 [INFO][4871] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" host="localhost" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.362 [INFO][4871] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" host="localhost" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.362 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" host="localhost" Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.362 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:57.389720 containerd[1557]: 2025-09-13 00:17:57.362 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" HandleID="k8s-pod-network.ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" Workload="localhost-k8s-calico--apiserver--568d757654--spds8-eth0" Sep 13 00:17:57.390594 containerd[1557]: 2025-09-13 00:17:57.367 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-spds8" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--spds8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568d757654--spds8-eth0", GenerateName:"calico-apiserver-568d757654-", Namespace:"calico-apiserver", SelfLink:"", UID:"9249a12e-d069-4512-b81e-37ec9c950ff4", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568d757654", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-568d757654-spds8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6aed1578a47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:57.390594 containerd[1557]: 2025-09-13 00:17:57.367 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-spds8" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--spds8-eth0" Sep 13 00:17:57.390594 containerd[1557]: 2025-09-13 00:17:57.367 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6aed1578a47 ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-spds8" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--spds8-eth0" Sep 13 00:17:57.390594 containerd[1557]: 2025-09-13 00:17:57.371 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-spds8" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--spds8-eth0" Sep 13 00:17:57.390594 containerd[1557]: 2025-09-13 00:17:57.373 [INFO][4849] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-spds8" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--spds8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568d757654--spds8-eth0", GenerateName:"calico-apiserver-568d757654-", Namespace:"calico-apiserver", SelfLink:"", UID:"9249a12e-d069-4512-b81e-37ec9c950ff4", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568d757654", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa", Pod:"calico-apiserver-568d757654-spds8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6aed1578a47", MAC:"8e:6b:82:3b:36:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:57.390594 containerd[1557]: 2025-09-13 00:17:57.385 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" Namespace="calico-apiserver" Pod="calico-apiserver-568d757654-spds8" WorkloadEndpoint="localhost-k8s-calico--apiserver--568d757654--spds8-eth0" Sep 13 00:17:57.430512 containerd[1557]: time="2025-09-13T00:17:57.430381432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nzdh,Uid:fc8c958f-db82-4584-b330-b71768df1e39,Namespace:kube-system,Attempt:0,} returns sandbox id \"da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a\"" Sep 13 00:17:57.431421 kubelet[2747]: E0913 00:17:57.431389 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:57.433321 containerd[1557]: time="2025-09-13T00:17:57.433261013Z" level=info msg="StartContainer for \"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\" returns successfully" Sep 13 00:17:57.445943 containerd[1557]: time="2025-09-13T00:17:57.445230232Z" level=info msg="CreateContainer within sandbox \"da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:17:57.452029 containerd[1557]: time="2025-09-13T00:17:57.451772679Z" level=info msg="connecting to shim ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa" address="unix:///run/containerd/s/ab8e88d7dec3bf08596fa0e413200c655c4b9a17be1836f69ae8db3546785f4b" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:17:57.459324 containerd[1557]: time="2025-09-13T00:17:57.459275026Z" level=info msg="Container a267dc3e357803ce3345c111d0c4c68fb241b75df10dfe12c828c0065fe6df78: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:17:57.468326 containerd[1557]: time="2025-09-13T00:17:57.468265428Z" level=info msg="CreateContainer within sandbox \"da642f60662b3066429403481dfb0e8b8c4e655aec333d18fe57f22f1c43b82a\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a267dc3e357803ce3345c111d0c4c68fb241b75df10dfe12c828c0065fe6df78\"" Sep 13 00:17:57.469246 containerd[1557]: time="2025-09-13T00:17:57.469208487Z" level=info msg="StartContainer for \"a267dc3e357803ce3345c111d0c4c68fb241b75df10dfe12c828c0065fe6df78\"" Sep 13 00:17:57.470949 containerd[1557]: time="2025-09-13T00:17:57.470881852Z" level=info msg="connecting to shim a267dc3e357803ce3345c111d0c4c68fb241b75df10dfe12c828c0065fe6df78" address="unix:///run/containerd/s/f819a6cb827419a1e39b03995b1f3d73045640a1cd18d01b3c2349e1b4bc44b1" protocol=ttrpc version=3 Sep 13 00:17:57.487151 systemd[1]: Started cri-containerd-ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa.scope - libcontainer container ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa. Sep 13 00:17:57.493355 systemd[1]: Started cri-containerd-a267dc3e357803ce3345c111d0c4c68fb241b75df10dfe12c828c0065fe6df78.scope - libcontainer container a267dc3e357803ce3345c111d0c4c68fb241b75df10dfe12c828c0065fe6df78. 
Sep 13 00:17:57.507159 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:57.538040 containerd[1557]: time="2025-09-13T00:17:57.537959136Z" level=info msg="StartContainer for \"a267dc3e357803ce3345c111d0c4c68fb241b75df10dfe12c828c0065fe6df78\" returns successfully" Sep 13 00:17:57.552223 containerd[1557]: time="2025-09-13T00:17:57.552093449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568d757654-spds8,Uid:9249a12e-d069-4512-b81e-37ec9c950ff4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa\"" Sep 13 00:17:58.086085 kubelet[2747]: E0913 00:17:58.086040 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:58.099538 kubelet[2747]: I0913 00:17:58.099452 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5nzdh" podStartSLOduration=53.099424573 podStartE2EDuration="53.099424573s" podCreationTimestamp="2025-09-13 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:58.098721447 +0000 UTC m=+59.074587966" watchObservedRunningTime="2025-09-13 00:17:58.099424573 +0000 UTC m=+59.075291112" Sep 13 00:17:58.715478 systemd-networkd[1459]: cali7e1f6080c94: Gained IPv6LL Sep 13 00:17:59.096557 kubelet[2747]: E0913 00:17:59.096408 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:59.419160 systemd-networkd[1459]: cali6aed1578a47: Gained IPv6LL Sep 13 00:18:00.099593 kubelet[2747]: E0913 00:18:00.099549 2747 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:00.118355 kubelet[2747]: E0913 00:18:00.118293 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:00.119023 containerd[1557]: time="2025-09-13T00:18:00.118884304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnfxw,Uid:bacbf801-d4d8-426e-9e88-33f253ebce09,Namespace:kube-system,Attempt:0,}" Sep 13 00:18:00.569960 systemd[1]: Started sshd@11-10.0.0.20:22-10.0.0.1:49878.service - OpenSSH per-connection server daemon (10.0.0.1:49878). Sep 13 00:18:01.181989 sshd[5067]: Accepted publickey for core from 10.0.0.1 port 49878 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:01.187705 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:01.207432 systemd-logind[1543]: New session 12 of user core. Sep 13 00:18:01.224374 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 13 00:18:01.815070 systemd-networkd[1459]: cali83f990a683c: Link UP Sep 13 00:18:01.820468 systemd-networkd[1459]: cali83f990a683c: Gained carrier Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.477 [INFO][5071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0 coredns-668d6bf9bc- kube-system bacbf801-d4d8-426e-9e88-33f253ebce09 872 0 2025-09-13 00:17:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-hnfxw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali83f990a683c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnfxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hnfxw-" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.478 [INFO][5071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnfxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.537 [INFO][5095] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" HandleID="k8s-pod-network.02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Workload="localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.537 [INFO][5095] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" 
HandleID="k8s-pod-network.02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Workload="localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f4e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-hnfxw", "timestamp":"2025-09-13 00:18:01.537176794 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.537 [INFO][5095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.537 [INFO][5095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.537 [INFO][5095] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.564 [INFO][5095] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" host="localhost" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.581 [INFO][5095] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.602 [INFO][5095] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.607 [INFO][5095] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.615 [INFO][5095] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.616 
[INFO][5095] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" host="localhost" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.625 [INFO][5095] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678 Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.667 [INFO][5095] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" host="localhost" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.792 [INFO][5095] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" host="localhost" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.792 [INFO][5095] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" host="localhost" Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.792 [INFO][5095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:18:01.982858 containerd[1557]: 2025-09-13 00:18:01.792 [INFO][5095] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" HandleID="k8s-pod-network.02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Workload="localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0" Sep 13 00:18:01.986657 containerd[1557]: 2025-09-13 00:18:01.798 [INFO][5071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnfxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bacbf801-d4d8-426e-9e88-33f253ebce09", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-hnfxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83f990a683c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:01.986657 containerd[1557]: 2025-09-13 00:18:01.799 [INFO][5071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnfxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0" Sep 13 00:18:01.986657 containerd[1557]: 2025-09-13 00:18:01.799 [INFO][5071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83f990a683c ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnfxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0" Sep 13 00:18:01.986657 containerd[1557]: 2025-09-13 00:18:01.808 [INFO][5071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnfxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0" Sep 13 00:18:01.986657 containerd[1557]: 2025-09-13 00:18:01.812 [INFO][5071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnfxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bacbf801-d4d8-426e-9e88-33f253ebce09", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 17, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678", Pod:"coredns-668d6bf9bc-hnfxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83f990a683c", MAC:"b2:e1:fa:52:5e:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:18:01.986657 containerd[1557]: 2025-09-13 00:18:01.964 [INFO][5071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" Namespace="kube-system" Pod="coredns-668d6bf9bc-hnfxw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hnfxw-eth0" Sep 13 00:18:02.055926 sshd[5089]: Connection closed by 10.0.0.1 port 49878 Sep 13 00:18:02.057144 sshd-session[5067]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:02.064437 systemd[1]: sshd@11-10.0.0.20:22-10.0.0.1:49878.service: Deactivated successfully. Sep 13 00:18:02.069584 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:18:02.074377 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:18:02.076068 systemd-logind[1543]: Removed session 12. Sep 13 00:18:02.710409 containerd[1557]: time="2025-09-13T00:18:02.710339327Z" level=info msg="connecting to shim 02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678" address="unix:///run/containerd/s/c63b3e04b7236de044973eb6de4fed94d7e83ac186c5e170f2408f3c8c164da4" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:18:02.891063 systemd[1]: Started cri-containerd-02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678.scope - libcontainer container 02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678. 
Sep 13 00:18:02.936886 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:18:03.216014 containerd[1557]: time="2025-09-13T00:18:03.211053174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hnfxw,Uid:bacbf801-d4d8-426e-9e88-33f253ebce09,Namespace:kube-system,Attempt:0,} returns sandbox id \"02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678\"" Sep 13 00:18:03.223120 kubelet[2747]: E0913 00:18:03.223089 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:03.243231 containerd[1557]: time="2025-09-13T00:18:03.241602209Z" level=info msg="CreateContainer within sandbox \"02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:18:03.264880 systemd-networkd[1459]: cali83f990a683c: Gained IPv6LL Sep 13 00:18:03.353669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254019117.mount: Deactivated successfully. 
Sep 13 00:18:03.364028 containerd[1557]: time="2025-09-13T00:18:03.360755324Z" level=info msg="Container 7cc3a2292de6cee3fbc2fbac125dd469d000a0559e5c4e48dbc994a6842cd745: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:18:03.665299 containerd[1557]: time="2025-09-13T00:18:03.664808903Z" level=info msg="CreateContainer within sandbox \"02e531a44093a50f045c89f41f35f5f0cda7c5a1ddac8b66ae3c8dbfbbc8c678\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cc3a2292de6cee3fbc2fbac125dd469d000a0559e5c4e48dbc994a6842cd745\"" Sep 13 00:18:03.666298 containerd[1557]: time="2025-09-13T00:18:03.666176278Z" level=info msg="StartContainer for \"7cc3a2292de6cee3fbc2fbac125dd469d000a0559e5c4e48dbc994a6842cd745\"" Sep 13 00:18:03.671433 containerd[1557]: time="2025-09-13T00:18:03.671368484Z" level=info msg="connecting to shim 7cc3a2292de6cee3fbc2fbac125dd469d000a0559e5c4e48dbc994a6842cd745" address="unix:///run/containerd/s/c63b3e04b7236de044973eb6de4fed94d7e83ac186c5e170f2408f3c8c164da4" protocol=ttrpc version=3 Sep 13 00:18:03.728222 systemd[1]: Started cri-containerd-7cc3a2292de6cee3fbc2fbac125dd469d000a0559e5c4e48dbc994a6842cd745.scope - libcontainer container 7cc3a2292de6cee3fbc2fbac125dd469d000a0559e5c4e48dbc994a6842cd745. Sep 13 00:18:03.750854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount514184064.mount: Deactivated successfully. 
Sep 13 00:18:04.102094 containerd[1557]: time="2025-09-13T00:18:04.102020768Z" level=info msg="StartContainer for \"7cc3a2292de6cee3fbc2fbac125dd469d000a0559e5c4e48dbc994a6842cd745\" returns successfully" Sep 13 00:18:04.132356 kubelet[2747]: E0913 00:18:04.131823 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:04.441332 kubelet[2747]: I0913 00:18:04.441129 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hnfxw" podStartSLOduration=59.441105313 podStartE2EDuration="59.441105313s" podCreationTimestamp="2025-09-13 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:18:04.440324767 +0000 UTC m=+65.416191296" watchObservedRunningTime="2025-09-13 00:18:04.441105313 +0000 UTC m=+65.416971832" Sep 13 00:18:05.133138 kubelet[2747]: E0913 00:18:05.133095 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:06.135566 kubelet[2747]: E0913 00:18:06.135510 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:06.141040 containerd[1557]: time="2025-09-13T00:18:06.140961713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:06.251213 containerd[1557]: time="2025-09-13T00:18:06.251127079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:18:06.323866 containerd[1557]: time="2025-09-13T00:18:06.323804624Z" level=info 
msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:06.413486 containerd[1557]: time="2025-09-13T00:18:06.413310289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:06.414100 containerd[1557]: time="2025-09-13T00:18:06.414045751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 9.144263794s" Sep 13 00:18:06.414100 containerd[1557]: time="2025-09-13T00:18:06.414090857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:18:06.415303 containerd[1557]: time="2025-09-13T00:18:06.415270670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:18:06.416550 containerd[1557]: time="2025-09-13T00:18:06.416518291Z" level=info msg="CreateContainer within sandbox \"42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:18:06.689338 containerd[1557]: time="2025-09-13T00:18:06.689176925Z" level=info msg="Container 67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:18:06.844437 containerd[1557]: time="2025-09-13T00:18:06.844358503Z" level=info msg="CreateContainer within sandbox \"42869b8f45910491cf8c139efecc81b03b7383a76997dff3d0001a999158d70b\" for 
&ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3\"" Sep 13 00:18:06.845359 containerd[1557]: time="2025-09-13T00:18:06.845287732Z" level=info msg="StartContainer for \"67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3\"" Sep 13 00:18:06.846830 containerd[1557]: time="2025-09-13T00:18:06.846659108Z" level=info msg="connecting to shim 67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3" address="unix:///run/containerd/s/7ce41a32dd21c5d5fe9bdcedabb1ae2d8eb6db5a88c957badbe4930659418190" protocol=ttrpc version=3 Sep 13 00:18:06.884103 systemd[1]: Started cri-containerd-67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3.scope - libcontainer container 67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3. Sep 13 00:18:06.950724 containerd[1557]: time="2025-09-13T00:18:06.950558616Z" level=info msg="StartContainer for \"67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3\" returns successfully" Sep 13 00:18:07.070596 systemd[1]: Started sshd@12-10.0.0.20:22-10.0.0.1:49892.service - OpenSSH per-connection server daemon (10.0.0.1:49892). Sep 13 00:18:07.124177 sshd[5263]: Accepted publickey for core from 10.0.0.1 port 49892 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:07.126033 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:07.131835 systemd-logind[1543]: New session 13 of user core. Sep 13 00:18:07.139308 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 13 00:18:07.162400 kubelet[2747]: I0913 00:18:07.162318 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-x8s5r" podStartSLOduration=31.291413363 podStartE2EDuration="47.16229447s" podCreationTimestamp="2025-09-13 00:17:20 +0000 UTC" firstStartedPulling="2025-09-13 00:17:50.544148867 +0000 UTC m=+51.520015386" lastFinishedPulling="2025-09-13 00:18:06.415029964 +0000 UTC m=+67.390896493" observedRunningTime="2025-09-13 00:18:07.161419813 +0000 UTC m=+68.137286342" watchObservedRunningTime="2025-09-13 00:18:07.16229447 +0000 UTC m=+68.138161009" Sep 13 00:18:07.243356 containerd[1557]: time="2025-09-13T00:18:07.243305631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3\" id:\"1a8c572b8c4224211b442b66e89076266cf4ea777f3fca1ef6adea9319098f16\" pid:5278 exit_status:1 exited_at:{seconds:1757722687 nanos:242043792}" Sep 13 00:18:07.299998 sshd[5266]: Connection closed by 10.0.0.1 port 49892 Sep 13 00:18:07.300507 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:07.314957 systemd[1]: sshd@12-10.0.0.20:22-10.0.0.1:49892.service: Deactivated successfully. Sep 13 00:18:07.317579 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:18:07.318710 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:18:07.322581 systemd[1]: Started sshd@13-10.0.0.20:22-10.0.0.1:49896.service - OpenSSH per-connection server daemon (10.0.0.1:49896). Sep 13 00:18:07.324069 systemd-logind[1543]: Removed session 13. Sep 13 00:18:07.380101 sshd[5308]: Accepted publickey for core from 10.0.0.1 port 49896 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:07.382166 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:07.387676 systemd-logind[1543]: New session 14 of user core. 
Sep 13 00:18:07.397068 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:18:07.860708 sshd[5310]: Connection closed by 10.0.0.1 port 49896 Sep 13 00:18:07.861188 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:07.873010 systemd[1]: sshd@13-10.0.0.20:22-10.0.0.1:49896.service: Deactivated successfully. Sep 13 00:18:07.875180 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:18:07.876156 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:18:07.879496 systemd[1]: Started sshd@14-10.0.0.20:22-10.0.0.1:49912.service - OpenSSH per-connection server daemon (10.0.0.1:49912). Sep 13 00:18:07.880429 systemd-logind[1543]: Removed session 14. Sep 13 00:18:07.940953 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 49912 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:07.943347 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:07.949091 systemd-logind[1543]: New session 15 of user core. Sep 13 00:18:07.958086 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:18:08.109568 sshd[5324]: Connection closed by 10.0.0.1 port 49912 Sep 13 00:18:08.109905 sshd-session[5322]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:08.114732 systemd[1]: sshd@14-10.0.0.20:22-10.0.0.1:49912.service: Deactivated successfully. Sep 13 00:18:08.116848 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:18:08.117675 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:18:08.118859 systemd-logind[1543]: Removed session 15. 
Sep 13 00:18:08.238051 containerd[1557]: time="2025-09-13T00:18:08.237999985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3\" id:\"20c6029f8e5482708bcf45bbdaca46648755c5dcdb9219e74bad8bfa8a6aedfd\" pid:5348 exit_status:1 exited_at:{seconds:1757722688 nanos:237637038}" Sep 13 00:18:09.192199 containerd[1557]: time="2025-09-13T00:18:09.192142430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:09.193374 containerd[1557]: time="2025-09-13T00:18:09.193340260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:18:09.194890 containerd[1557]: time="2025-09-13T00:18:09.194835002Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:09.199576 containerd[1557]: time="2025-09-13T00:18:09.199538784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:09.200608 containerd[1557]: time="2025-09-13T00:18:09.200575939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.785238794s" Sep 13 00:18:09.200608 containerd[1557]: time="2025-09-13T00:18:09.200607177Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:18:09.201705 containerd[1557]: time="2025-09-13T00:18:09.201521671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:18:09.205844 containerd[1557]: time="2025-09-13T00:18:09.205317973Z" level=info msg="CreateContainer within sandbox \"f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:18:09.229590 containerd[1557]: time="2025-09-13T00:18:09.229130708Z" level=info msg="Container 56ab8c31b25c75921e6fb3e80e8bc377cafa7c01f173a370b9c9e36b5ef8014b: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:18:09.240493 containerd[1557]: time="2025-09-13T00:18:09.240431760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3\" id:\"8793ef552e46791b350d7686ab0ec7028ffb52828f7f164b092b09e188e94a15\" pid:5376 exit_status:1 exited_at:{seconds:1757722689 nanos:240042392}" Sep 13 00:18:09.240897 containerd[1557]: time="2025-09-13T00:18:09.240858598Z" level=info msg="CreateContainer within sandbox \"f24aba97e336cef82903465c19d2295ab328984469f859b6fe142d8f71f13d4f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"56ab8c31b25c75921e6fb3e80e8bc377cafa7c01f173a370b9c9e36b5ef8014b\"" Sep 13 00:18:09.241567 containerd[1557]: time="2025-09-13T00:18:09.241510132Z" level=info msg="StartContainer for \"56ab8c31b25c75921e6fb3e80e8bc377cafa7c01f173a370b9c9e36b5ef8014b\"" Sep 13 00:18:09.243594 containerd[1557]: time="2025-09-13T00:18:09.243565126Z" level=info msg="connecting to shim 56ab8c31b25c75921e6fb3e80e8bc377cafa7c01f173a370b9c9e36b5ef8014b" address="unix:///run/containerd/s/a2ebcdd955134c1fed9b7fca06fe4fac261fc8717c45af2df82a7bd7bad1a221" protocol=ttrpc version=3 Sep 
13 00:18:09.269322 systemd[1]: Started cri-containerd-56ab8c31b25c75921e6fb3e80e8bc377cafa7c01f173a370b9c9e36b5ef8014b.scope - libcontainer container 56ab8c31b25c75921e6fb3e80e8bc377cafa7c01f173a370b9c9e36b5ef8014b. Sep 13 00:18:09.538445 containerd[1557]: time="2025-09-13T00:18:09.538398268Z" level=info msg="StartContainer for \"56ab8c31b25c75921e6fb3e80e8bc377cafa7c01f173a370b9c9e36b5ef8014b\" returns successfully" Sep 13 00:18:10.229134 kubelet[2747]: I0913 00:18:10.229074 2747 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:18:10.229134 kubelet[2747]: I0913 00:18:10.229136 2747 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:18:10.254701 kubelet[2747]: I0913 00:18:10.254631 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wxjn8" podStartSLOduration=30.144906104 podStartE2EDuration="50.25460649s" podCreationTimestamp="2025-09-13 00:17:20 +0000 UTC" firstStartedPulling="2025-09-13 00:17:49.091636574 +0000 UTC m=+50.067503093" lastFinishedPulling="2025-09-13 00:18:09.20133696 +0000 UTC m=+70.177203479" observedRunningTime="2025-09-13 00:18:10.253485846 +0000 UTC m=+71.229352365" watchObservedRunningTime="2025-09-13 00:18:10.25460649 +0000 UTC m=+71.230473009" Sep 13 00:18:13.127645 systemd[1]: Started sshd@15-10.0.0.20:22-10.0.0.1:51848.service - OpenSSH per-connection server daemon (10.0.0.1:51848). Sep 13 00:18:13.201324 sshd[5436]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:13.203692 sshd-session[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:13.209393 systemd-logind[1543]: New session 16 of user core. 
Sep 13 00:18:13.220317 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:18:13.402064 sshd[5438]: Connection closed by 10.0.0.1 port 51848 Sep 13 00:18:13.401958 sshd-session[5436]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:13.407861 systemd[1]: sshd@15-10.0.0.20:22-10.0.0.1:51848.service: Deactivated successfully. Sep 13 00:18:13.410690 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:18:13.412979 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:18:13.415146 systemd-logind[1543]: Removed session 16. Sep 13 00:18:14.119051 kubelet[2747]: E0913 00:18:14.118992 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:14.976452 containerd[1557]: time="2025-09-13T00:18:14.976372445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:14.982375 containerd[1557]: time="2025-09-13T00:18:14.982317345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:18:14.991932 containerd[1557]: time="2025-09-13T00:18:14.991816339Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:15.004905 containerd[1557]: time="2025-09-13T00:18:15.004831556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:15.005700 containerd[1557]: time="2025-09-13T00:18:15.005656572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id 
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.80410224s" Sep 13 00:18:15.005700 containerd[1557]: time="2025-09-13T00:18:15.005690085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:18:15.006885 containerd[1557]: time="2025-09-13T00:18:15.006763563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:18:15.008212 containerd[1557]: time="2025-09-13T00:18:15.008144835Z" level=info msg="CreateContainer within sandbox \"2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:18:15.121170 containerd[1557]: time="2025-09-13T00:18:15.121121599Z" level=info msg="Container 8a173d38d9442752b31b12e896671264bb6a00e99dda93093f621d717a16372d: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:18:15.194787 containerd[1557]: time="2025-09-13T00:18:15.194689678Z" level=info msg="CreateContainer within sandbox \"2b0697238783cfe68e9806ca4b2705d7aed984ec73f3db79c9ca9b76f757373e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8a173d38d9442752b31b12e896671264bb6a00e99dda93093f621d717a16372d\"" Sep 13 00:18:15.195622 containerd[1557]: time="2025-09-13T00:18:15.195576080Z" level=info msg="StartContainer for \"8a173d38d9442752b31b12e896671264bb6a00e99dda93093f621d717a16372d\"" Sep 13 00:18:15.197126 containerd[1557]: time="2025-09-13T00:18:15.197078902Z" level=info msg="connecting to shim 8a173d38d9442752b31b12e896671264bb6a00e99dda93093f621d717a16372d" address="unix:///run/containerd/s/d29c111b673f1468d7838c88dae1a6f81154efe8bd8cf1db7e732395eb693c1c" protocol=ttrpc 
version=3 Sep 13 00:18:15.225096 systemd[1]: Started cri-containerd-8a173d38d9442752b31b12e896671264bb6a00e99dda93093f621d717a16372d.scope - libcontainer container 8a173d38d9442752b31b12e896671264bb6a00e99dda93093f621d717a16372d. Sep 13 00:18:15.287063 containerd[1557]: time="2025-09-13T00:18:15.286897704Z" level=info msg="StartContainer for \"8a173d38d9442752b31b12e896671264bb6a00e99dda93093f621d717a16372d\" returns successfully" Sep 13 00:18:16.505947 kubelet[2747]: I0913 00:18:16.505789 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-568d757654-xnwhr" podStartSLOduration=39.092442085 podStartE2EDuration="59.505767909s" podCreationTimestamp="2025-09-13 00:17:17 +0000 UTC" firstStartedPulling="2025-09-13 00:17:54.593309656 +0000 UTC m=+55.569176175" lastFinishedPulling="2025-09-13 00:18:15.00663548 +0000 UTC m=+75.982501999" observedRunningTime="2025-09-13 00:18:16.505270414 +0000 UTC m=+77.481136943" watchObservedRunningTime="2025-09-13 00:18:16.505767909 +0000 UTC m=+77.481634428" Sep 13 00:18:18.093158 containerd[1557]: time="2025-09-13T00:18:18.093101490Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd\" id:\"9ddb104c21519a0acda5fa77f3e02b896658c57ecf921851049ed7c40ba19e07\" pid:5511 exit_status:1 exited_at:{seconds:1757722698 nanos:92564219}" Sep 13 00:18:18.168455 kubelet[2747]: I0913 00:18:18.168421 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:18:18.430130 systemd[1]: Started sshd@16-10.0.0.20:22-10.0.0.1:51858.service - OpenSSH per-connection server daemon (10.0.0.1:51858). 
Sep 13 00:18:18.543260 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:18.546000 sshd-session[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:18.551302 systemd-logind[1543]: New session 17 of user core. Sep 13 00:18:18.561084 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:18:18.713939 sshd[5527]: Connection closed by 10.0.0.1 port 51858 Sep 13 00:18:18.715052 sshd-session[5525]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:18.719572 systemd[1]: sshd@16-10.0.0.20:22-10.0.0.1:51858.service: Deactivated successfully. Sep 13 00:18:18.721750 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:18:18.726406 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:18:18.733003 systemd-logind[1543]: Removed session 17. Sep 13 00:18:19.931071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598200756.mount: Deactivated successfully. 
Sep 13 00:18:20.982218 containerd[1557]: time="2025-09-13T00:18:20.982126647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:21.006477 containerd[1557]: time="2025-09-13T00:18:21.006414514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:18:21.077305 containerd[1557]: time="2025-09-13T00:18:21.077216658Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:21.170571 containerd[1557]: time="2025-09-13T00:18:21.170403800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:21.171934 containerd[1557]: time="2025-09-13T00:18:21.171792791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 6.164969605s" Sep 13 00:18:21.171934 containerd[1557]: time="2025-09-13T00:18:21.171828649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:18:21.173606 containerd[1557]: time="2025-09-13T00:18:21.173567163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:18:21.175474 containerd[1557]: time="2025-09-13T00:18:21.175427309Z" level=info msg="CreateContainer within sandbox 
\"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:18:21.369085 containerd[1557]: time="2025-09-13T00:18:21.368789285Z" level=info msg="Container 5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:18:21.473600 containerd[1557]: time="2025-09-13T00:18:21.473524968Z" level=info msg="CreateContainer within sandbox \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\"" Sep 13 00:18:21.474315 containerd[1557]: time="2025-09-13T00:18:21.474276196Z" level=info msg="StartContainer for \"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\"" Sep 13 00:18:21.475965 containerd[1557]: time="2025-09-13T00:18:21.475930580Z" level=info msg="connecting to shim 5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49" address="unix:///run/containerd/s/2b874cbadf84c92b061816214f3970e6f67166576bab1c4c3fcc812616c9bbd6" protocol=ttrpc version=3 Sep 13 00:18:21.504096 systemd[1]: Started cri-containerd-5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49.scope - libcontainer container 5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49. 
Sep 13 00:18:21.606345 containerd[1557]: time="2025-09-13T00:18:21.605470118Z" level=info msg="StartContainer for \"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\" returns successfully" Sep 13 00:18:21.853170 containerd[1557]: time="2025-09-13T00:18:21.853084773Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:18:21.864241 containerd[1557]: time="2025-09-13T00:18:21.864177517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:18:21.870429 containerd[1557]: time="2025-09-13T00:18:21.870381660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 696.780211ms" Sep 13 00:18:21.870429 containerd[1557]: time="2025-09-13T00:18:21.870430913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:18:21.873251 containerd[1557]: time="2025-09-13T00:18:21.872695618Z" level=info msg="CreateContainer within sandbox \"ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:18:21.936695 containerd[1557]: time="2025-09-13T00:18:21.936629908Z" level=info msg="Container 055281173da18d4a79fb9e791b88cc59e4bcec0fe74c4fdc08d4196321c72e72: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:18:21.986976 containerd[1557]: time="2025-09-13T00:18:21.986904175Z" level=info msg="CreateContainer within sandbox \"ef2c0fd4a3775e6f51fcfe892f77cd70ffdb1c657d390694fa271e3e2a940bfa\" 
for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"055281173da18d4a79fb9e791b88cc59e4bcec0fe74c4fdc08d4196321c72e72\"" Sep 13 00:18:21.987771 containerd[1557]: time="2025-09-13T00:18:21.987726839Z" level=info msg="StartContainer for \"055281173da18d4a79fb9e791b88cc59e4bcec0fe74c4fdc08d4196321c72e72\"" Sep 13 00:18:21.989297 containerd[1557]: time="2025-09-13T00:18:21.989261165Z" level=info msg="connecting to shim 055281173da18d4a79fb9e791b88cc59e4bcec0fe74c4fdc08d4196321c72e72" address="unix:///run/containerd/s/ab8e88d7dec3bf08596fa0e413200c655c4b9a17be1836f69ae8db3546785f4b" protocol=ttrpc version=3 Sep 13 00:18:22.020084 systemd[1]: Started cri-containerd-055281173da18d4a79fb9e791b88cc59e4bcec0fe74c4fdc08d4196321c72e72.scope - libcontainer container 055281173da18d4a79fb9e791b88cc59e4bcec0fe74c4fdc08d4196321c72e72. Sep 13 00:18:22.167436 containerd[1557]: time="2025-09-13T00:18:22.167284796Z" level=info msg="StartContainer for \"055281173da18d4a79fb9e791b88cc59e4bcec0fe74c4fdc08d4196321c72e72\" returns successfully" Sep 13 00:18:22.222574 containerd[1557]: time="2025-09-13T00:18:22.222509615Z" level=info msg="StopContainer for \"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\" with timeout 30 (s)" Sep 13 00:18:22.224667 containerd[1557]: time="2025-09-13T00:18:22.222801631Z" level=info msg="StopContainer for \"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\" with timeout 30 (s)" Sep 13 00:18:22.239483 kubelet[2747]: I0913 00:18:22.237669 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-568d757654-spds8" podStartSLOduration=40.919869804 podStartE2EDuration="1m5.237641914s" podCreationTimestamp="2025-09-13 00:17:17 +0000 UTC" firstStartedPulling="2025-09-13 00:17:57.553554014 +0000 UTC m=+58.529420533" lastFinishedPulling="2025-09-13 00:18:21.871326124 +0000 UTC m=+82.847192643" observedRunningTime="2025-09-13 00:18:22.237570118 +0000 UTC 
m=+83.213436637" watchObservedRunningTime="2025-09-13 00:18:22.237641914 +0000 UTC m=+83.213508433" Sep 13 00:18:22.240489 containerd[1557]: time="2025-09-13T00:18:22.238201137Z" level=info msg="Stop container \"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\" with signal terminated" Sep 13 00:18:22.241574 containerd[1557]: time="2025-09-13T00:18:22.241535516Z" level=info msg="Stop container \"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\" with signal terminated" Sep 13 00:18:22.265360 systemd[1]: cri-containerd-5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49.scope: Deactivated successfully. Sep 13 00:18:22.268401 containerd[1557]: time="2025-09-13T00:18:22.268360124Z" level=info msg="received exit event container_id:\"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\" id:\"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\" pid:5566 exit_status:2 exited_at:{seconds:1757722702 nanos:267777356}" Sep 13 00:18:22.268827 containerd[1557]: time="2025-09-13T00:18:22.268774381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\" id:\"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\" pid:5566 exit_status:2 exited_at:{seconds:1757722702 nanos:267777356}" Sep 13 00:18:22.275049 systemd[1]: cri-containerd-4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3.scope: Deactivated successfully. 
Sep 13 00:18:22.277757 containerd[1557]: time="2025-09-13T00:18:22.277701392Z" level=info msg="received exit event container_id:\"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\" id:\"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\" pid:4936 exited_at:{seconds:1757722702 nanos:277421650}" Sep 13 00:18:22.278648 containerd[1557]: time="2025-09-13T00:18:22.278617694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\" id:\"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\" pid:4936 exited_at:{seconds:1757722702 nanos:277421650}" Sep 13 00:18:22.287489 kubelet[2747]: I0913 00:18:22.287431 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5cb844f4c4-j5lmn" podStartSLOduration=28.652626483 podStartE2EDuration="59.287409438s" podCreationTimestamp="2025-09-13 00:17:23 +0000 UTC" firstStartedPulling="2025-09-13 00:17:50.538366604 +0000 UTC m=+51.514233123" lastFinishedPulling="2025-09-13 00:18:21.173149559 +0000 UTC m=+82.149016078" observedRunningTime="2025-09-13 00:18:22.285475952 +0000 UTC m=+83.261342471" watchObservedRunningTime="2025-09-13 00:18:22.287409438 +0000 UTC m=+83.263275947" Sep 13 00:18:22.371072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49-rootfs.mount: Deactivated successfully. Sep 13 00:18:22.371200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3-rootfs.mount: Deactivated successfully. 
Sep 13 00:18:22.867001 containerd[1557]: time="2025-09-13T00:18:22.866941743Z" level=info msg="StopContainer for \"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\" returns successfully" Sep 13 00:18:22.876787 containerd[1557]: time="2025-09-13T00:18:22.876375317Z" level=info msg="StopContainer for \"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\" returns successfully" Sep 13 00:18:22.877556 containerd[1557]: time="2025-09-13T00:18:22.877513291Z" level=info msg="StopPodSandbox for \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\"" Sep 13 00:18:22.895796 containerd[1557]: time="2025-09-13T00:18:22.895725094Z" level=info msg="Container to stop \"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:18:22.895796 containerd[1557]: time="2025-09-13T00:18:22.895782693Z" level=info msg="Container to stop \"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:18:22.909397 systemd[1]: cri-containerd-75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719.scope: Deactivated successfully. 
Sep 13 00:18:22.911460 containerd[1557]: time="2025-09-13T00:18:22.911082310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" id:\"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" pid:4513 exit_status:137 exited_at:{seconds:1757722702 nanos:910633927}" Sep 13 00:18:22.958178 containerd[1557]: time="2025-09-13T00:18:22.958122379Z" level=info msg="shim disconnected" id=75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719 namespace=k8s.io Sep 13 00:18:22.958178 containerd[1557]: time="2025-09-13T00:18:22.958171452Z" level=warning msg="cleaning up after shim disconnected" id=75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719 namespace=k8s.io Sep 13 00:18:22.964316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719-rootfs.mount: Deactivated successfully. Sep 13 00:18:22.969592 containerd[1557]: time="2025-09-13T00:18:22.958182303Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:18:22.975177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719-shm.mount: Deactivated successfully. 
Sep 13 00:18:22.989576 containerd[1557]: time="2025-09-13T00:18:22.989481637Z" level=info msg="received exit event sandbox_id:\"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" exit_status:137 exited_at:{seconds:1757722702 nanos:910633927}" Sep 13 00:18:23.193801 kubelet[2747]: I0913 00:18:23.193409 2747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:23.256708 systemd-networkd[1459]: cali8a6915932d0: Link DOWN Sep 13 00:18:23.256717 systemd-networkd[1459]: cali8a6915932d0: Lost carrier Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.252 [INFO][5711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.254 [INFO][5711] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" iface="eth0" netns="/var/run/netns/cni-13c861d4-fd48-47f0-8491-0f9817a55deb" Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.255 [INFO][5711] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" iface="eth0" netns="/var/run/netns/cni-13c861d4-fd48-47f0-8491-0f9817a55deb" Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.265 [INFO][5711] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" after=10.148087ms iface="eth0" netns="/var/run/netns/cni-13c861d4-fd48-47f0-8491-0f9817a55deb"
Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.265 [INFO][5711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.265 [INFO][5711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.303 [INFO][5725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.303 [INFO][5725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.303 [INFO][5725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.389 [INFO][5725] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.389 [INFO][5725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.396 [INFO][5725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:23.403544 containerd[1557]: 2025-09-13 00:18:23.399 [INFO][5711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:23.407770 systemd[1]: run-netns-cni\x2d13c861d4\x2dfd48\x2d47f0\x2d8491\x2d0f9817a55deb.mount: Deactivated successfully. 
Sep 13 00:18:23.415243 containerd[1557]: time="2025-09-13T00:18:23.415137107Z" level=info msg="TearDown network for sandbox \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" successfully" Sep 13 00:18:23.415243 containerd[1557]: time="2025-09-13T00:18:23.415227539Z" level=info msg="StopPodSandbox for \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" returns successfully" Sep 13 00:18:23.556520 kubelet[2747]: I0913 00:18:23.556474 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f255ff-eb3e-4053-b0b8-b528af157243-whisker-ca-bundle\") pod \"10f255ff-eb3e-4053-b0b8-b528af157243\" (UID: \"10f255ff-eb3e-4053-b0b8-b528af157243\") " Sep 13 00:18:23.556520 kubelet[2747]: I0913 00:18:23.556522 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10f255ff-eb3e-4053-b0b8-b528af157243-whisker-backend-key-pair\") pod \"10f255ff-eb3e-4053-b0b8-b528af157243\" (UID: \"10f255ff-eb3e-4053-b0b8-b528af157243\") " Sep 13 00:18:23.556520 kubelet[2747]: I0913 00:18:23.556543 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clwsp\" (UniqueName: \"kubernetes.io/projected/10f255ff-eb3e-4053-b0b8-b528af157243-kube-api-access-clwsp\") pod \"10f255ff-eb3e-4053-b0b8-b528af157243\" (UID: \"10f255ff-eb3e-4053-b0b8-b528af157243\") " Sep 13 00:18:23.557118 kubelet[2747]: I0913 00:18:23.557089 2747 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10f255ff-eb3e-4053-b0b8-b528af157243-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "10f255ff-eb3e-4053-b0b8-b528af157243" (UID: "10f255ff-eb3e-4053-b0b8-b528af157243"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 00:18:23.564771 systemd[1]: var-lib-kubelet-pods-10f255ff\x2deb3e\x2d4053\x2db0b8\x2db528af157243-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dclwsp.mount: Deactivated successfully. Sep 13 00:18:23.565179 kubelet[2747]: I0913 00:18:23.565098 2747 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f255ff-eb3e-4053-b0b8-b528af157243-kube-api-access-clwsp" (OuterVolumeSpecName: "kube-api-access-clwsp") pod "10f255ff-eb3e-4053-b0b8-b528af157243" (UID: "10f255ff-eb3e-4053-b0b8-b528af157243"). InnerVolumeSpecName "kube-api-access-clwsp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:18:23.565179 kubelet[2747]: I0913 00:18:23.565122 2747 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f255ff-eb3e-4053-b0b8-b528af157243-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "10f255ff-eb3e-4053-b0b8-b528af157243" (UID: "10f255ff-eb3e-4053-b0b8-b528af157243"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:18:23.565264 systemd[1]: var-lib-kubelet-pods-10f255ff\x2deb3e\x2d4053\x2db0b8\x2db528af157243-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 13 00:18:23.657320 kubelet[2747]: I0913 00:18:23.657259 2747 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f255ff-eb3e-4053-b0b8-b528af157243-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:23.657320 kubelet[2747]: I0913 00:18:23.657297 2747 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10f255ff-eb3e-4053-b0b8-b528af157243-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:23.657320 kubelet[2747]: I0913 00:18:23.657307 2747 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-clwsp\" (UniqueName: \"kubernetes.io/projected/10f255ff-eb3e-4053-b0b8-b528af157243-kube-api-access-clwsp\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:23.725840 systemd[1]: Started sshd@17-10.0.0.20:22-10.0.0.1:50620.service - OpenSSH per-connection server daemon (10.0.0.1:50620). Sep 13 00:18:23.805789 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 50620 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:23.808863 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:23.814474 systemd-logind[1543]: New session 18 of user core. Sep 13 00:18:23.824069 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:18:23.981626 sshd[5749]: Connection closed by 10.0.0.1 port 50620 Sep 13 00:18:23.981833 sshd-session[5746]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:23.990224 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:18:23.990599 systemd[1]: sshd@17-10.0.0.20:22-10.0.0.1:50620.service: Deactivated successfully. Sep 13 00:18:23.993714 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:18:23.995778 systemd-logind[1543]: Removed session 18. 
Sep 13 00:18:24.202519 systemd[1]: Removed slice kubepods-besteffort-pod10f255ff_eb3e_4053_b0b8_b528af157243.slice - libcontainer container kubepods-besteffort-pod10f255ff_eb3e_4053_b0b8_b528af157243.slice. Sep 13 00:18:25.121610 kubelet[2747]: I0913 00:18:25.121558 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10f255ff-eb3e-4053-b0b8-b528af157243" path="/var/lib/kubelet/pods/10f255ff-eb3e-4053-b0b8-b528af157243/volumes" Sep 13 00:18:26.118540 kubelet[2747]: E0913 00:18:26.118465 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:26.141367 containerd[1557]: time="2025-09-13T00:18:26.141310584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e\" id:\"f81793cfc163141c53ab5ad0de630bdd179dd0f6f082c06739afff9ae9d48cc8\" pid:5778 exited_at:{seconds:1757722706 nanos:140968031}" Sep 13 00:18:27.118490 kubelet[2747]: E0913 00:18:27.118443 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:27.200276 containerd[1557]: time="2025-09-13T00:18:27.200162168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3\" id:\"b065113e117c3f31465b1ea4e28395b0814ce80a81e4e2a335b4fe3c9497634a\" pid:5800 exited_at:{seconds:1757722707 nanos:199533181}" Sep 13 00:18:29.003641 systemd[1]: Started sshd@18-10.0.0.20:22-10.0.0.1:50636.service - OpenSSH per-connection server daemon (10.0.0.1:50636). 
Sep 13 00:18:29.080515 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 50636 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:29.082822 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:29.090511 systemd-logind[1543]: New session 19 of user core. Sep 13 00:18:29.101068 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:18:29.118841 kubelet[2747]: E0913 00:18:29.118789 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:29.495823 sshd[5815]: Connection closed by 10.0.0.1 port 50636 Sep 13 00:18:29.496219 sshd-session[5813]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:29.502435 systemd[1]: sshd@18-10.0.0.20:22-10.0.0.1:50636.service: Deactivated successfully. Sep 13 00:18:29.505206 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:18:29.506210 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:18:29.508464 systemd-logind[1543]: Removed session 19. Sep 13 00:18:34.511775 systemd[1]: Started sshd@19-10.0.0.20:22-10.0.0.1:37606.service - OpenSSH per-connection server daemon (10.0.0.1:37606). Sep 13 00:18:34.567437 sshd[5837]: Accepted publickey for core from 10.0.0.1 port 37606 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:34.569417 sshd-session[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:34.574974 systemd-logind[1543]: New session 20 of user core. Sep 13 00:18:34.582081 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 13 00:18:34.704764 sshd[5839]: Connection closed by 10.0.0.1 port 37606 Sep 13 00:18:34.705158 sshd-session[5837]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:34.719489 systemd[1]: sshd@19-10.0.0.20:22-10.0.0.1:37606.service: Deactivated successfully. Sep 13 00:18:34.721845 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:18:34.722932 systemd-logind[1543]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:18:34.726583 systemd[1]: Started sshd@20-10.0.0.20:22-10.0.0.1:37618.service - OpenSSH per-connection server daemon (10.0.0.1:37618). Sep 13 00:18:34.727474 systemd-logind[1543]: Removed session 20. Sep 13 00:18:34.786372 sshd[5852]: Accepted publickey for core from 10.0.0.1 port 37618 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:34.787868 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:34.792981 systemd-logind[1543]: New session 21 of user core. Sep 13 00:18:34.805152 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:18:35.184476 sshd[5854]: Connection closed by 10.0.0.1 port 37618 Sep 13 00:18:35.185529 sshd-session[5852]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:35.196025 systemd[1]: sshd@20-10.0.0.20:22-10.0.0.1:37618.service: Deactivated successfully. Sep 13 00:18:35.198288 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:18:35.199377 systemd-logind[1543]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:18:35.202714 systemd[1]: Started sshd@21-10.0.0.20:22-10.0.0.1:37624.service - OpenSSH per-connection server daemon (10.0.0.1:37624). Sep 13 00:18:35.204103 systemd-logind[1543]: Removed session 21. 
Sep 13 00:18:35.272007 sshd[5865]: Accepted publickey for core from 10.0.0.1 port 37624 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:35.274038 sshd-session[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:35.279685 systemd-logind[1543]: New session 22 of user core. Sep 13 00:18:35.293250 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:18:35.980627 sshd[5867]: Connection closed by 10.0.0.1 port 37624 Sep 13 00:18:35.983066 sshd-session[5865]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:35.998230 systemd[1]: sshd@21-10.0.0.20:22-10.0.0.1:37624.service: Deactivated successfully. Sep 13 00:18:36.006560 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:18:36.009300 systemd-logind[1543]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:18:36.016868 systemd-logind[1543]: Removed session 22. Sep 13 00:18:36.021093 systemd[1]: Started sshd@22-10.0.0.20:22-10.0.0.1:37638.service - OpenSSH per-connection server daemon (10.0.0.1:37638). Sep 13 00:18:36.083106 sshd[5892]: Accepted publickey for core from 10.0.0.1 port 37638 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:36.084434 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:36.090103 systemd-logind[1543]: New session 23 of user core. Sep 13 00:18:36.098180 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:18:36.392640 sshd[5896]: Connection closed by 10.0.0.1 port 37638 Sep 13 00:18:36.394462 sshd-session[5892]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:36.402615 systemd[1]: sshd@22-10.0.0.20:22-10.0.0.1:37638.service: Deactivated successfully. Sep 13 00:18:36.404800 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:18:36.405654 systemd-logind[1543]: Session 23 logged out. Waiting for processes to exit. 
Sep 13 00:18:36.409638 systemd[1]: Started sshd@23-10.0.0.20:22-10.0.0.1:37654.service - OpenSSH per-connection server daemon (10.0.0.1:37654). Sep 13 00:18:36.410698 systemd-logind[1543]: Removed session 23. Sep 13 00:18:36.461491 sshd[5910]: Accepted publickey for core from 10.0.0.1 port 37654 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:36.463283 sshd-session[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:36.468666 systemd-logind[1543]: New session 24 of user core. Sep 13 00:18:36.486078 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:18:36.623354 sshd[5912]: Connection closed by 10.0.0.1 port 37654 Sep 13 00:18:36.623783 sshd-session[5910]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:36.631068 systemd[1]: sshd@23-10.0.0.20:22-10.0.0.1:37654.service: Deactivated successfully. Sep 13 00:18:36.634358 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:18:36.635578 systemd-logind[1543]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:18:36.637787 systemd-logind[1543]: Removed session 24. Sep 13 00:18:39.242486 containerd[1557]: time="2025-09-13T00:18:39.242435694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67d82ddf8ab7fc83c3f37a3be9270dbca50ebaa4b00d92effcbee9c0b7e827a3\" id:\"3df494e3373570ccaca6895f84b72cbbac3eeebe84c1078e52780e554cf7bc8a\" pid:5939 exited_at:{seconds:1757722719 nanos:242117979}" Sep 13 00:18:41.647694 systemd[1]: Started sshd@24-10.0.0.20:22-10.0.0.1:54892.service - OpenSSH per-connection server daemon (10.0.0.1:54892). Sep 13 00:18:41.711678 sshd[5953]: Accepted publickey for core from 10.0.0.1 port 54892 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:41.713892 sshd-session[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:41.719833 systemd-logind[1543]: New session 25 of user core. 
Sep 13 00:18:41.729089 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 00:18:41.854027 sshd[5955]: Connection closed by 10.0.0.1 port 54892 Sep 13 00:18:41.854430 sshd-session[5953]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:41.859878 systemd[1]: sshd@24-10.0.0.20:22-10.0.0.1:54892.service: Deactivated successfully. Sep 13 00:18:41.862570 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:18:41.863688 systemd-logind[1543]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:18:41.865296 systemd-logind[1543]: Removed session 25. Sep 13 00:18:46.872245 systemd[1]: Started sshd@25-10.0.0.20:22-10.0.0.1:54906.service - OpenSSH per-connection server daemon (10.0.0.1:54906). Sep 13 00:18:46.932608 sshd[5973]: Accepted publickey for core from 10.0.0.1 port 54906 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:46.934657 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:46.940808 systemd-logind[1543]: New session 26 of user core. Sep 13 00:18:46.950081 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:18:47.075059 sshd[5975]: Connection closed by 10.0.0.1 port 54906 Sep 13 00:18:47.075416 sshd-session[5973]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:47.079409 systemd[1]: sshd@25-10.0.0.20:22-10.0.0.1:54906.service: Deactivated successfully. Sep 13 00:18:47.082137 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:18:47.084577 systemd-logind[1543]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:18:47.086972 systemd-logind[1543]: Removed session 26. 
Sep 13 00:18:47.996794 containerd[1557]: time="2025-09-13T00:18:47.996733649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b8d53b6a4de75e5ff052907602d68e4544cd780c6dfc34b26f95e29fa3fcecd\" id:\"c0d59c13fc92be25cbfa32027261f42c5af9b52de4d16d80bb7429703018c009\" pid:5998 exited_at:{seconds:1757722727 nanos:996291214}" Sep 13 00:18:50.118466 kubelet[2747]: E0913 00:18:50.118405 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:52.093854 systemd[1]: Started sshd@26-10.0.0.20:22-10.0.0.1:50068.service - OpenSSH per-connection server daemon (10.0.0.1:50068). Sep 13 00:18:52.168101 sshd[6012]: Accepted publickey for core from 10.0.0.1 port 50068 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:52.170696 sshd-session[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:52.176262 systemd-logind[1543]: New session 27 of user core. Sep 13 00:18:52.184145 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 13 00:18:52.462495 sshd[6014]: Connection closed by 10.0.0.1 port 50068 Sep 13 00:18:52.464895 sshd-session[6012]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:52.471765 systemd-logind[1543]: Session 27 logged out. Waiting for processes to exit. Sep 13 00:18:52.472027 systemd[1]: sshd@26-10.0.0.20:22-10.0.0.1:50068.service: Deactivated successfully. Sep 13 00:18:52.474696 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 00:18:52.478464 systemd-logind[1543]: Removed session 27. 
Sep 13 00:18:55.887793 containerd[1557]: time="2025-09-13T00:18:55.887748133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e\" id:\"b7ac833e4e81d13300cceb9e6819ef8692947ee78a028e6f417a3da90e7f9f50\" pid:6040 exited_at:{seconds:1757722735 nanos:887505651}" Sep 13 00:18:56.137129 containerd[1557]: time="2025-09-13T00:18:56.137075883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53d631dc886b32aa2b6eee0341195cc255aff5b55b2451928dbbda91b41bdd7e\" id:\"d5738797d48182081a79e3b3ea5c5be87208d8bb80094dced1f38c9da22ce137\" pid:6061 exited_at:{seconds:1757722736 nanos:136701438}" Sep 13 00:18:57.477107 systemd[1]: Started sshd@27-10.0.0.20:22-10.0.0.1:50106.service - OpenSSH per-connection server daemon (10.0.0.1:50106). Sep 13 00:18:57.535470 sshd[6072]: Accepted publickey for core from 10.0.0.1 port 50106 ssh2: RSA SHA256:mlYU9m+a2feC4Sym7fN+EoNujIcjljhjZFU1t4NzJ4c Sep 13 00:18:57.537210 sshd-session[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:57.544027 systemd-logind[1543]: New session 28 of user core. Sep 13 00:18:57.552152 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 13 00:18:57.704937 sshd[6074]: Connection closed by 10.0.0.1 port 50106 Sep 13 00:18:57.705379 sshd-session[6072]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:57.713354 systemd[1]: sshd@27-10.0.0.20:22-10.0.0.1:50106.service: Deactivated successfully. Sep 13 00:18:57.715875 systemd[1]: session-28.scope: Deactivated successfully. Sep 13 00:18:57.716943 systemd-logind[1543]: Session 28 logged out. Waiting for processes to exit. Sep 13 00:18:57.719073 systemd-logind[1543]: Removed session 28. 
Sep 13 00:18:59.113683 kubelet[2747]: I0913 00:18:59.113439 2747 scope.go:117] "RemoveContainer" containerID="4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3" Sep 13 00:18:59.117488 containerd[1557]: time="2025-09-13T00:18:59.117434890Z" level=info msg="RemoveContainer for \"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\"" Sep 13 00:18:59.133200 containerd[1557]: time="2025-09-13T00:18:59.133132492Z" level=info msg="RemoveContainer for \"4193c6d354e8c979cb55e706ef0c703c2e95ec64e3d326e995d00f1b8ec34ed3\" returns successfully" Sep 13 00:18:59.140956 kubelet[2747]: I0913 00:18:59.140613 2747 scope.go:117] "RemoveContainer" containerID="5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49" Sep 13 00:18:59.144224 containerd[1557]: time="2025-09-13T00:18:59.144179771Z" level=info msg="RemoveContainer for \"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\"" Sep 13 00:18:59.152036 containerd[1557]: time="2025-09-13T00:18:59.151981352Z" level=info msg="RemoveContainer for \"5508729324766a5980ef9834c7bf52b7a3518a37149790e9bfcd6bdb486eed49\" returns successfully" Sep 13 00:18:59.153616 containerd[1557]: time="2025-09-13T00:18:59.153587169Z" level=info msg="StopPodSandbox for \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\"" Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.266 [WARNING][6099] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" WorkloadEndpoint="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.266 [INFO][6099] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.266 [INFO][6099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" iface="eth0" netns=""
Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.267 [INFO][6099] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.267 [INFO][6099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.293 [INFO][6108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.294 [INFO][6108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.294 [INFO][6108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.300 [WARNING][6108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0"
Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.301 [INFO][6108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.303 [INFO][6108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:59.312369 containerd[1557]: 2025-09-13 00:18:59.309 [INFO][6099] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:59.313531 containerd[1557]: time="2025-09-13T00:18:59.312423599Z" level=info msg="TearDown network for sandbox \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" successfully" Sep 13 00:18:59.313531 containerd[1557]: time="2025-09-13T00:18:59.312455319Z" level=info msg="StopPodSandbox for \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" returns successfully" Sep 13 00:18:59.313531 containerd[1557]: time="2025-09-13T00:18:59.313377281Z" level=info msg="RemovePodSandbox for \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\"" Sep 13 00:18:59.313531 containerd[1557]: time="2025-09-13T00:18:59.313417457Z" level=info msg="Forcibly stopping sandbox \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\"" Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.360 [WARNING][6125] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" WorkloadEndpoint="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0"
Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.360 [INFO][6125] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.360 [INFO][6125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" iface="eth0" netns="" Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.360 [INFO][6125] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.360 [INFO][6125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.389 [INFO][6134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.390 [INFO][6134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.390 [INFO][6134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.396 [WARNING][6134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0"
Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.396 [INFO][6134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" HandleID="k8s-pod-network.75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Workload="localhost-k8s-whisker--5cb844f4c4--j5lmn-eth0" Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.398 [INFO][6134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:18:59.405549 containerd[1557]: 2025-09-13 00:18:59.402 [INFO][6125] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719" Sep 13 00:18:59.405549 containerd[1557]: time="2025-09-13T00:18:59.405471717Z" level=info msg="TearDown network for sandbox \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" successfully" Sep 13 00:18:59.413004 containerd[1557]: time="2025-09-13T00:18:59.412962954Z" level=info msg="Ensure that sandbox 75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719 in task-service has been cleanup successfully" Sep 13 00:18:59.418669 containerd[1557]: time="2025-09-13T00:18:59.418625550Z" level=info msg="RemovePodSandbox \"75b3ed57ed767a97eb061350fddd35f76221cdee3d8ad50269fac3af97cb3719\" returns successfully"