Sep 9 00:01:13.902273 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:08:00 -00 2025 Sep 9 00:01:13.902295 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 9 00:01:13.902306 kernel: BIOS-provided physical RAM map: Sep 9 00:01:13.902313 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 00:01:13.902319 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 9 00:01:13.902326 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 9 00:01:13.902333 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 9 00:01:13.902346 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 9 00:01:13.902353 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 9 00:01:13.902359 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 9 00:01:13.902366 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 9 00:01:13.902380 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 9 00:01:13.902386 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 9 00:01:13.902393 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 9 00:01:13.902401 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 9 00:01:13.902408 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 9 00:01:13.902418 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 9 00:01:13.902425 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 9 00:01:13.902432 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 9 00:01:13.902439 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 9 00:01:13.902446 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 9 00:01:13.902453 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 9 00:01:13.902460 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 9 00:01:13.902467 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 00:01:13.902476 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 9 00:01:13.902483 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 9 00:01:13.902490 kernel: NX (Execute Disable) protection: active Sep 9 00:01:13.902499 kernel: APIC: Static calls initialized Sep 9 00:01:13.902506 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 9 00:01:13.902513 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 9 00:01:13.902520 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 9 00:01:13.902529 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 9 00:01:13.902536 kernel: extended physical RAM map: Sep 9 00:01:13.902543 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 00:01:13.902550 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Sep 9 00:01:13.902557 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 9 00:01:13.902564 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 9 00:01:13.902578 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 9 00:01:13.902585 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 9 00:01:13.902594 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 9 00:01:13.902605 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Sep 9 00:01:13.902613 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Sep 9 00:01:13.902621 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Sep 9 00:01:13.902628 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Sep 9 00:01:13.902635 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Sep 9 00:01:13.902645 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 9 00:01:13.902652 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 9 00:01:13.902659 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 9 00:01:13.902667 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 9 00:01:13.902674 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 9 00:01:13.902681 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 9 00:01:13.902689 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 9 00:01:13.902696 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 9 00:01:13.902704 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 9 00:01:13.902713 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 9 00:01:13.902721 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 9 00:01:13.902728 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 9 00:01:13.902735 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 00:01:13.902742 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 9 00:01:13.902750 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 9 00:01:13.902757 kernel: efi: EFI v2.7 by EDK II Sep 9 00:01:13.902765 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Sep 9 00:01:13.902772 kernel: random: crng init done Sep 9 00:01:13.902779 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 9 00:01:13.902787 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 9 00:01:13.902794 kernel: secureboot: Secure boot disabled Sep 9 00:01:13.902803 kernel: SMBIOS 2.8 present. 
Sep 9 00:01:13.902811 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 9 00:01:13.902818 kernel: Hypervisor detected: KVM Sep 9 00:01:13.902825 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 00:01:13.902833 kernel: kvm-clock: using sched offset of 2811535196 cycles Sep 9 00:01:13.902840 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 00:01:13.902848 kernel: tsc: Detected 2794.748 MHz processor Sep 9 00:01:13.902856 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 00:01:13.902864 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 00:01:13.902871 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 9 00:01:13.902881 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 9 00:01:13.902889 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 00:01:13.902896 kernel: Using GB pages for direct mapping Sep 9 00:01:13.902904 kernel: ACPI: Early table checksum verification disabled Sep 9 00:01:13.902911 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 9 00:01:13.902919 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 9 00:01:13.902926 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:01:13.902934 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:01:13.902941 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 9 00:01:13.902951 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:01:13.902959 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:01:13.902966 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:01:13.902985 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:01:13.903000 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 9 00:01:13.903008 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 9 00:01:13.903016 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 9 00:01:13.903023 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 9 00:01:13.903031 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 9 00:01:13.903041 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 9 00:01:13.903048 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 9 00:01:13.903056 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 9 00:01:13.903063 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 9 00:01:13.903071 kernel: No NUMA configuration found Sep 9 00:01:13.903078 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 9 00:01:13.903086 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Sep 9 00:01:13.903093 kernel: Zone ranges: Sep 9 00:01:13.903100 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 00:01:13.903110 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 9 00:01:13.903118 kernel: Normal empty Sep 9 00:01:13.903125 kernel: Movable zone start for each node Sep 9 00:01:13.903133 kernel: Early memory node ranges Sep 9 00:01:13.903140 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 9 00:01:13.903147 kernel: node 
0: [mem 0x0000000000100000-0x00000000007fffff] Sep 9 00:01:13.903155 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 9 00:01:13.903162 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 9 00:01:13.903170 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 9 00:01:13.903179 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 9 00:01:13.903187 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Sep 9 00:01:13.903194 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Sep 9 00:01:13.903202 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 9 00:01:13.903209 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:01:13.903217 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 9 00:01:13.903232 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 9 00:01:13.903242 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:01:13.903250 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 9 00:01:13.903257 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 9 00:01:13.903265 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 9 00:01:13.903273 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 9 00:01:13.903283 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 9 00:01:13.903291 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 9 00:01:13.903298 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 00:01:13.903314 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 00:01:13.903322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 9 00:01:13.903333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 00:01:13.903340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 00:01:13.903348 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 00:01:13.903356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 00:01:13.903364 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 00:01:13.903372 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 9 00:01:13.903380 kernel: TSC deadline timer available Sep 9 00:01:13.903387 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 9 00:01:13.903395 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 00:01:13.903405 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 9 00:01:13.903413 kernel: kvm-guest: setup PV sched yield Sep 9 00:01:13.903421 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 9 00:01:13.903428 kernel: Booting paravirtualized kernel on KVM Sep 9 00:01:13.903437 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 00:01:13.903444 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 9 00:01:13.903452 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 9 00:01:13.903460 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 9 00:01:13.903468 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 9 00:01:13.903477 kernel: kvm-guest: PV spinlocks enabled Sep 9 00:01:13.903485 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 00:01:13.903494 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 9 00:01:13.903503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:01:13.903510 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 00:01:13.903518 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:01:13.903526 kernel: Fallback order for Node 0: 0 Sep 9 00:01:13.903534 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Sep 9 00:01:13.903542 kernel: Policy zone: DMA32 Sep 9 00:01:13.903552 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:01:13.903560 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43504K init, 1572K bss, 177824K reserved, 0K cma-reserved) Sep 9 00:01:13.903568 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 00:01:13.903582 kernel: ftrace: allocating 37943 entries in 149 pages Sep 9 00:01:13.903590 kernel: ftrace: allocated 149 pages with 4 groups Sep 9 00:01:13.903598 kernel: Dynamic Preempt: voluntary Sep 9 00:01:13.903606 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:01:13.903618 kernel: rcu: RCU event tracing is enabled. Sep 9 00:01:13.903628 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 00:01:13.903636 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:01:13.903644 kernel: Rude variant of Tasks RCU enabled. Sep 9 00:01:13.903652 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:01:13.903660 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 00:01:13.903668 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 00:01:13.903675 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 9 00:01:13.903683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 00:01:13.903691 kernel: Console: colour dummy device 80x25 Sep 9 00:01:13.903699 kernel: printk: console [ttyS0] enabled Sep 9 00:01:13.903709 kernel: ACPI: Core revision 20230628 Sep 9 00:01:13.903717 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 9 00:01:13.903725 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 00:01:13.903733 kernel: x2apic enabled Sep 9 00:01:13.903740 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 00:01:13.903748 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 9 00:01:13.903756 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 9 00:01:13.903764 kernel: kvm-guest: setup PV IPIs Sep 9 00:01:13.903772 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 00:01:13.903782 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 9 00:01:13.903790 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 9 00:01:13.903797 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 00:01:13.903805 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 9 00:01:13.903813 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 9 00:01:13.903821 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 00:01:13.903828 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 00:01:13.903841 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 00:01:13.903849 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 9 00:01:13.903859 kernel: active return thunk: retbleed_return_thunk Sep 9 00:01:13.903867 kernel: RETBleed: Mitigation: untrained return thunk Sep 9 00:01:13.903875 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 00:01:13.903883 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 00:01:13.903891 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 9 00:01:13.903901 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 9 00:01:13.903909 kernel: active return thunk: srso_return_thunk Sep 9 00:01:13.903917 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 9 00:01:13.903927 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 00:01:13.903935 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 00:01:13.903943 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 00:01:13.903950 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 00:01:13.903958 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 9 00:01:13.903966 kernel: Freeing SMP alternatives memory: 32K Sep 9 00:01:13.903983 kernel: pid_max: default: 32768 minimum: 301 Sep 9 00:01:13.903991 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 9 00:01:13.903999 kernel: landlock: Up and running. Sep 9 00:01:13.904009 kernel: SELinux: Initializing. Sep 9 00:01:13.904017 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:01:13.904025 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:01:13.904033 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 9 00:01:13.904041 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:01:13.904049 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:01:13.904057 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:01:13.904065 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 9 00:01:13.904072 kernel: ... version: 0 Sep 9 00:01:13.904082 kernel: ... bit width: 48 Sep 9 00:01:13.904090 kernel: ... generic registers: 6 Sep 9 00:01:13.904098 kernel: ... value mask: 0000ffffffffffff Sep 9 00:01:13.904105 kernel: ... max period: 00007fffffffffff Sep 9 00:01:13.904113 kernel: ... fixed-purpose events: 0 Sep 9 00:01:13.904121 kernel: ... 
event mask: 000000000000003f Sep 9 00:01:13.904128 kernel: signal: max sigframe size: 1776 Sep 9 00:01:13.904136 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:01:13.904144 kernel: rcu: Max phase no-delay instances is 400. Sep 9 00:01:13.904153 kernel: smp: Bringing up secondary CPUs ... Sep 9 00:01:13.904161 kernel: smpboot: x86: Booting SMP configuration: Sep 9 00:01:13.904169 kernel: .... node #0, CPUs: #1 #2 #3 Sep 9 00:01:13.904176 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 00:01:13.904184 kernel: smpboot: Max logical packages: 1 Sep 9 00:01:13.904192 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 9 00:01:13.904199 kernel: devtmpfs: initialized Sep 9 00:01:13.904207 kernel: x86/mm: Memory block size: 128MB Sep 9 00:01:13.904215 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 9 00:01:13.904225 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 9 00:01:13.904233 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 9 00:01:13.904240 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 9 00:01:13.904248 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Sep 9 00:01:13.904256 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 9 00:01:13.904264 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:01:13.904272 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 00:01:13.904279 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:01:13.904287 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:01:13.904297 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:01:13.904305 kernel: audit: type=2000 audit(1757376073.160:1): state=initialized audit_enabled=0 res=1 Sep 9 00:01:13.904313 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:01:13.904320 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 00:01:13.904328 kernel: cpuidle: using governor menu Sep 9 00:01:13.904336 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:01:13.904343 kernel: dca service started, version 1.12.1 Sep 9 00:01:13.904351 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 9 00:01:13.904359 kernel: PCI: Using configuration type 1 for base access Sep 9 00:01:13.904369 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 9 00:01:13.904377 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:01:13.904385 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:01:13.904393 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:01:13.904400 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:01:13.904408 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:01:13.904416 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:01:13.904424 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:01:13.904431 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:01:13.904441 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 9 00:01:13.904449 kernel: ACPI: Interpreter enabled Sep 9 00:01:13.904457 kernel: ACPI: PM: (supports S0 S3 S5) Sep 9 00:01:13.904464 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 00:01:13.904472 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 00:01:13.904480 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 00:01:13.904488 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 9 00:01:13.904495 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 00:01:13.904685 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:01:13.904831 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 9 00:01:13.904960 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 9 00:01:13.904981 kernel: PCI host bridge to bus 0000:00 Sep 9 00:01:13.905115 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 00:01:13.905230 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 9 00:01:13.905344 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 00:01:13.905462 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 9 00:01:13.905582 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 9 00:01:13.905697 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 9 00:01:13.905810 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 00:01:13.905952 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 9 00:01:13.906114 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 9 00:01:13.906248 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 9 00:01:13.906371 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 9 00:01:13.906511 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 9 00:01:13.906647 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 9 00:01:13.906773 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 00:01:13.906907 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 9 00:01:13.907052 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 9 00:01:13.907181 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 9 00:01:13.907305 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Sep 9 00:01:13.907464 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 9 00:01:13.907601 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 9 00:01:13.907728 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 9 00:01:13.907852 
kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Sep 9 00:01:13.907999 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 9 00:01:13.908133 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 9 00:01:13.908257 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 9 00:01:13.908385 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 9 00:01:13.908514 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 9 00:01:13.908656 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 9 00:01:13.908782 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 9 00:01:13.908913 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 9 00:01:13.909067 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 9 00:01:13.909192 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 9 00:01:13.909325 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 9 00:01:13.909454 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 9 00:01:13.909465 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 00:01:13.909473 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 00:01:13.909480 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 00:01:13.909492 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 00:01:13.909500 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 9 00:01:13.909508 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 9 00:01:13.909515 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 9 00:01:13.909523 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 9 00:01:13.909531 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 9 00:01:13.909539 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 9 00:01:13.909546 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 9 00:01:13.909554 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 9 00:01:13.909564 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 9 00:01:13.909581 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 9 00:01:13.909589 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 9 00:01:13.909597 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 9 00:01:13.909604 kernel: iommu: Default domain type: Translated Sep 9 00:01:13.909613 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 00:01:13.909620 kernel: efivars: Registered efivars operations Sep 9 00:01:13.909628 kernel: PCI: Using ACPI for IRQ routing Sep 9 00:01:13.909636 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 00:01:13.909647 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 9 00:01:13.909654 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 9 00:01:13.909662 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Sep 9 00:01:13.909670 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Sep 9 00:01:13.909677 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 9 00:01:13.909685 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 9 00:01:13.909693 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Sep 9 00:01:13.909700 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 9 00:01:13.909825 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 9 
00:01:13.909953 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 9 00:01:13.910122 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 00:01:13.910134 kernel: vgaarb: loaded Sep 9 00:01:13.910142 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 9 00:01:13.910150 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 9 00:01:13.910158 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 00:01:13.910166 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:01:13.910174 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:01:13.910186 kernel: pnp: PnP ACPI init Sep 9 00:01:13.910334 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 9 00:01:13.910347 kernel: pnp: PnP ACPI: found 6 devices Sep 9 00:01:13.910356 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 00:01:13.910364 kernel: NET: Registered PF_INET protocol family Sep 9 00:01:13.910391 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 00:01:13.910401 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 00:01:13.910409 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:01:13.910420 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:01:13.910428 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 00:01:13.910436 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 00:01:13.910444 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:01:13.910452 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:01:13.910460 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:01:13.910468 kernel: NET: Registered PF_XDP protocol family Sep 9 00:01:13.910601 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 9 00:01:13.910728 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 9 00:01:13.910866 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 00:01:13.911005 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 9 00:01:13.911120 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 00:01:13.911232 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 9 00:01:13.911343 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 9 00:01:13.911454 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 9 00:01:13.911465 kernel: PCI: CLS 0 bytes, default 64 Sep 9 00:01:13.911473 kernel: Initialise system trusted keyrings Sep 9 00:01:13.911485 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 00:01:13.911493 kernel: Key type asymmetric registered Sep 9 00:01:13.911501 kernel: Asymmetric key parser 'x509' registered Sep 9 00:01:13.911509 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 9 00:01:13.911518 kernel: io scheduler mq-deadline registered Sep 9 00:01:13.911526 kernel: io scheduler kyber registered Sep 9 00:01:13.911534 kernel: io scheduler bfq registered Sep 9 00:01:13.911542 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 00:01:13.911550 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 00:01:13.911561 kernel: ACPI: \_SB_.GSIH: Enabled at 
IRQ 23 Sep 9 00:01:13.911579 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 00:01:13.911587 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:01:13.911596 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 00:01:13.911605 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 00:01:13.911613 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 00:01:13.911623 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 00:01:13.911777 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 9 00:01:13.911790 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 00:01:13.911906 kernel: rtc_cmos 00:04: registered as rtc0 Sep 9 00:01:13.912092 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:01:13 UTC (1757376073) Sep 9 00:01:13.912210 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 9 00:01:13.912220 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 00:01:13.912232 kernel: efifb: probing for efifb Sep 9 00:01:13.912241 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 9 00:01:13.912249 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 9 00:01:13.912257 kernel: efifb: scrolling: redraw Sep 9 00:01:13.912265 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 9 00:01:13.912273 kernel: Console: switching to colour frame buffer device 160x50 Sep 9 00:01:13.912284 kernel: fb0: EFI VGA frame buffer device Sep 9 00:01:13.912293 kernel: pstore: Using crash dump compression: deflate Sep 9 00:01:13.912301 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 00:01:13.912309 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:01:13.912320 kernel: Segment Routing with IPv6 Sep 9 00:01:13.912328 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:01:13.912336 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:01:13.912344 kernel: Key type dns_resolver registered Sep 9 00:01:13.912352 kernel: IPI shorthand broadcast: enabled Sep 9 00:01:13.912360 kernel: sched_clock: Marking stable (1394002550, 178472658)->(1611208045, -38732837) Sep 9 00:01:13.912368 kernel: registered taskstats version 1 Sep 9 00:01:13.912379 kernel: Loading compiled-in X.509 certificates Sep 9 00:01:13.912388 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: c16a276a56169aed770943c7e14b6e7e5f4f7133' Sep 9 00:01:13.912399 kernel: Key type .fscrypt registered Sep 9 00:01:13.912406 kernel: Key type fscrypt-provisioning registered Sep 9 00:01:13.912415 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 00:01:13.912423 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:01:13.912431 kernel: ima: No architecture policies found Sep 9 00:01:13.912439 kernel: clk: Disabling unused clocks Sep 9 00:01:13.912447 kernel: Freeing unused kernel image (initmem) memory: 43504K Sep 9 00:01:13.912456 kernel: Write protecting the kernel read-only data: 38912k Sep 9 00:01:13.912466 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 9 00:01:13.912474 kernel: Run /init as init process Sep 9 00:01:13.912482 kernel: with arguments: Sep 9 00:01:13.912490 kernel: /init Sep 9 00:01:13.912498 kernel: with environment: Sep 9 00:01:13.912506 kernel: HOME=/ Sep 9 00:01:13.912513 kernel: TERM=linux Sep 9 00:01:13.912521 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:01:13.912530 systemd[1]: Successfully made /usr/ read-only. 
Sep 9 00:01:13.912544 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:01:13.912553 systemd[1]: Detected virtualization kvm. Sep 9 00:01:13.912561 systemd[1]: Detected architecture x86-64. Sep 9 00:01:13.912570 systemd[1]: Running in initrd. Sep 9 00:01:13.912585 systemd[1]: No hostname configured, using default hostname. Sep 9 00:01:13.912594 systemd[1]: Hostname set to . Sep 9 00:01:13.912602 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:01:13.912611 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:01:13.912622 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:01:13.912631 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:01:13.912641 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:01:13.912649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:01:13.912658 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:01:13.912668 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:01:13.912680 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:01:13.912692 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:01:13.912700 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:01:13.912709 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:01:13.912718 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:01:13.912726 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:01:13.912735 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:01:13.912743 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:01:13.912752 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:01:13.912763 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:01:13.912772 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:01:13.912781 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 00:01:13.912789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:01:13.912812 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:01:13.912821 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:01:13.912830 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:01:13.912838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:01:13.912849 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:01:13.912859 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:01:13.912867 systemd[1]: Starting systemd-fsck-usr.service... 
Sep 9 00:01:13.912876 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:01:13.912885 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:01:13.912893 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:01:13.912902 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:01:13.912911 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:01:13.912922 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:01:13.912952 systemd-journald[195]: Collecting audit messages is disabled. Sep 9 00:01:13.912994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:01:13.913004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:01:13.913012 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:01:13.913021 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:01:13.913030 systemd-journald[195]: Journal started Sep 9 00:01:13.913060 systemd-journald[195]: Runtime Journal (/run/log/journal/0d312e4a830746a38051a9fd497b1061) is 6M, max 48.2M, 42.2M free. Sep 9 00:01:13.896224 systemd-modules-load[196]: Inserted module 'overlay' Sep 9 00:01:13.917861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:01:13.917887 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:01:13.923542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:01:13.926687 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:01:13.928766 systemd-modules-load[196]: Inserted module 'br_netfilter' Sep 9 00:01:13.929687 kernel: Bridge firewalling registered Sep 9 00:01:13.929858 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:01:13.931293 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:01:13.934460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:01:13.936306 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:01:13.939500 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 00:01:13.940814 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:01:13.947450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:01:13.949722 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:01:13.959635 dracut-cmdline[228]: dracut-dracut-053 Sep 9 00:01:13.963822 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 9 00:01:14.004222 systemd-resolved[234]: Positive Trust Anchors: Sep 9 00:01:14.004245 systemd-resolved[234]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:01:14.004285 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:01:14.007841 systemd-resolved[234]: Defaulting to hostname 'linux'. Sep 9 00:01:14.009293 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:01:14.014005 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:01:14.066032 kernel: SCSI subsystem initialized Sep 9 00:01:14.075011 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:01:14.086026 kernel: iscsi: registered transport (tcp) Sep 9 00:01:14.107010 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:01:14.107062 kernel: QLogic iSCSI HBA Driver Sep 9 00:01:14.166133 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:01:14.181101 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:01:14.210382 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:01:14.210452 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:01:14.211375 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 9 00:01:14.252033 kernel: raid6: avx2x4 gen() 29528 MB/s Sep 9 00:01:14.269022 kernel: raid6: avx2x2 gen() 31098 MB/s Sep 9 00:01:14.286073 kernel: raid6: avx2x1 gen() 25727 MB/s Sep 9 00:01:14.286146 kernel: raid6: using algorithm avx2x2 gen() 31098 MB/s Sep 9 00:01:14.304072 kernel: raid6: .... xor() 19813 MB/s, rmw enabled Sep 9 00:01:14.304155 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:01:14.325013 kernel: xor: automatically using best checksumming function avx Sep 9 00:01:14.472024 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:01:14.486248 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:01:14.498187 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:01:14.515839 systemd-udevd[415]: Using default interface naming scheme 'v255'. Sep 9 00:01:14.521364 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:01:14.532150 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:01:14.549345 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Sep 9 00:01:14.581941 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:01:14.594147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:01:14.663492 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:01:14.673192 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 00:01:14.685809 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 00:01:14.689353 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 9 00:01:14.691808 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:01:14.692227 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:01:14.699134 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:01:14.708719 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:01:14.713096 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 00:01:14.715068 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:01:14.717254 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:01:14.718385 kernel: GPT:9289727 != 19775487 Sep 9 00:01:14.718403 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:01:14.718414 kernel: GPT:9289727 != 19775487 Sep 9 00:01:14.719827 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:01:14.719854 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:01:14.719865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:01:14.749922 kernel: AVX2 version of gcm_enc/dec engaged. Sep 9 00:01:14.750002 kernel: AES CTR mode by8 optimization enabled Sep 9 00:01:14.751739 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:01:14.753307 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:01:14.756173 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:01:14.758841 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:01:14.759087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:01:14.763003 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:01:14.770053 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (466) Sep 9 00:01:14.774032 kernel: libata version 3.00 loaded. Sep 9 00:01:14.774603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:01:14.786186 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 00:01:14.786404 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 00:01:14.787652 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 9 00:01:14.787820 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 00:01:14.789616 kernel: BTRFS: device fsid 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (470) Sep 9 00:01:14.794141 kernel: scsi host0: ahci Sep 9 00:01:14.794339 kernel: scsi host1: ahci Sep 9 00:01:14.794506 kernel: scsi host2: ahci Sep 9 00:01:14.795232 kernel: scsi host3: ahci Sep 9 00:01:14.795420 kernel: scsi host4: ahci Sep 9 00:01:14.796256 kernel: scsi host5: ahci Sep 9 00:01:14.797203 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 9 00:01:14.797223 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 9 00:01:14.797370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 00:01:14.802727 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 9 00:01:14.802749 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 9 00:01:14.802760 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 9 00:01:14.802771 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 9 00:01:14.812476 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 00:01:14.824763 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:01:14.848225 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:01:14.857430 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 00:01:14.859872 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 00:01:14.875121 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:01:14.878078 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:01:14.884959 disk-uuid[560]: Primary Header is updated. Sep 9 00:01:14.884959 disk-uuid[560]: Secondary Entries is updated. Sep 9 00:01:14.884959 disk-uuid[560]: Secondary Header is updated. Sep 9 00:01:14.887994 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:01:14.892996 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:01:14.905373 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:01:15.113015 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 00:01:15.113097 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 00:01:15.114010 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 00:01:15.114999 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 00:01:15.115997 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 00:01:15.116020 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 00:01:15.117003 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 00:01:15.118337 kernel: ata3.00: applying bridge limits Sep 9 00:01:15.118398 kernel: ata3.00: configured for UDMA/100 Sep 9 00:01:15.119002 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 00:01:15.177013 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 00:01:15.177330 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:01:15.190999 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:01:15.894027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:01:15.894217 disk-uuid[561]: The operation has completed successfully. Sep 9 00:01:15.927275 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:01:15.927398 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:01:15.976241 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:01:15.982171 sh[592]: Success Sep 9 00:01:15.996999 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 9 00:01:16.034568 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:01:16.047655 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:01:16.050155 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 9 00:01:16.061687 kernel: BTRFS info (device dm-0): first mount of filesystem 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf Sep 9 00:01:16.061757 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:01:16.061769 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 9 00:01:16.062670 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:01:16.063412 kernel: BTRFS info (device dm-0): using free space tree Sep 9 00:01:16.068641 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:01:16.070157 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:01:16.082203 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:01:16.084106 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:01:16.102835 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 9 00:01:16.102900 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:01:16.102916 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:01:16.107010 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:01:16.111008 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 9 00:01:16.117458 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:01:16.123170 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 00:01:16.265528 ignition[681]: Ignition 2.20.0 Sep 9 00:01:16.265539 ignition[681]: Stage: fetch-offline Sep 9 00:01:16.265590 ignition[681]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:01:16.265599 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:01:16.268357 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:01:16.265706 ignition[681]: parsed url from cmdline: "" Sep 9 00:01:16.265710 ignition[681]: no config URL provided Sep 9 00:01:16.265715 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:01:16.265724 ignition[681]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:01:16.265749 ignition[681]: op(1): [started] loading QEMU firmware config module Sep 9 00:01:16.265755 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:01:16.280256 ignition[681]: op(1): [finished] loading QEMU firmware config module Sep 9 00:01:16.282000 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:01:16.320062 ignition[681]: parsing config with SHA512: 4a0acced443a4b7df299e7471e36d83f1ce433586e75f4526abd0d99856d5ea26b64dc39dd225f5becf6f49170d3021f1acf6813d1fa81867fdd8c9942527fcd Sep 9 00:01:16.323849 unknown[681]: fetched base config from "system" Sep 9 00:01:16.323861 unknown[681]: fetched user config from "qemu" Sep 9 00:01:16.330378 systemd-networkd[778]: lo: Link UP Sep 9 00:01:16.330386 systemd-networkd[778]: lo: Gained carrier Sep 9 00:01:16.332079 systemd-networkd[778]: Enumeration completed Sep 9 00:01:16.332446 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:01:16.332450 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 9 00:01:16.333418 systemd-networkd[778]: eth0: Link UP Sep 9 00:01:16.333422 systemd-networkd[778]: eth0: Gained carrier Sep 9 00:01:16.333429 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:01:16.334055 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:01:16.334771 systemd[1]: Reached target network.target - Network. Sep 9 00:01:16.366605 ignition[681]: fetch-offline: fetch-offline passed Sep 9 00:01:16.366814 ignition[681]: Ignition finished successfully Sep 9 00:01:16.371767 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:01:16.374569 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:01:16.377028 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:01:16.399144 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:01:16.422119 ignition[782]: Ignition 2.20.0 Sep 9 00:01:16.422130 ignition[782]: Stage: kargs Sep 9 00:01:16.422280 ignition[782]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:01:16.422290 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:01:16.423078 ignition[782]: kargs: kargs passed Sep 9 00:01:16.423121 ignition[782]: Ignition finished successfully Sep 9 00:01:16.429684 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:01:16.443086 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 00:01:16.454442 ignition[791]: Ignition 2.20.0 Sep 9 00:01:16.454450 ignition[791]: Stage: disks Sep 9 00:01:16.454609 ignition[791]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:01:16.454619 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:01:16.455396 ignition[791]: disks: disks passed Sep 9 00:01:16.455436 ignition[791]: Ignition finished successfully Sep 9 00:01:16.460822 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:01:16.461439 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:01:16.462911 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:01:16.464901 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:01:16.467265 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:01:16.469053 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:01:16.478094 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:01:16.493107 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 9 00:01:16.499693 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:01:16.511095 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:01:16.631000 kernel: EXT4-fs (vda9): mounted filesystem 4436772e-5166-41e3-9cb5-50bbb91cbcf6 r/w with ordered data mode. Quota mode: none. Sep 9 00:01:16.631296 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:01:16.632700 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:01:16.645045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:01:16.646791 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
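Note: eth0 above is matched by Flatcar's stock /usr/lib/systemd/network/zz-default.network and then obtains 10.0.0.133/16 over DHCP. The shipped unit's exact contents are not part of this log; a minimal catch-all DHCP unit of that shape looks roughly like:

    # Illustrative systemd.network unit (not the verbatim Flatcar file)
    [Match]
    Name=*

    [Network]
    DHCP=yes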
Sep 9 00:01:16.648194 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:01:16.656389 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (809) Sep 9 00:01:16.656412 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 9 00:01:16.656424 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:01:16.656442 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:01:16.648235 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:01:16.659233 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:01:16.648258 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:01:16.660395 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:01:16.665588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:01:16.679110 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:01:16.718631 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:01:16.723550 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:01:16.728598 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:01:16.733338 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:01:16.820343 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:01:16.832065 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:01:16.833822 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:01:16.841002 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 9 00:01:16.859961 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 00:01:16.878463 ignition[924]: INFO : Ignition 2.20.0 Sep 9 00:01:16.878463 ignition[924]: INFO : Stage: mount Sep 9 00:01:16.880255 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:01:16.880255 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:01:16.880255 ignition[924]: INFO : mount: mount passed Sep 9 00:01:16.880255 ignition[924]: INFO : Ignition finished successfully Sep 9 00:01:16.886026 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:01:16.893136 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:01:17.061159 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:01:17.074115 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:01:17.085650 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (937) Sep 9 00:01:17.085703 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 9 00:01:17.085718 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:01:17.087130 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:01:17.089994 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:01:17.091151 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
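Note: at this point the stateful ROOT filesystem (ext4 on vda9), the OEM partition (btrfs on vda6) and the verity-backed /usr are all assembled under /sysroot, and initrd-setup-root is seeding the passwd/group/shadow files. Illustrative commands to inspect the same layout from the booted system (device names assumed to match this VM):

    # Illustrative inspection commands
    lsblk -o NAME,FSTYPE,LABEL,PARTLABEL,MOUNTPOINT /dev/vda
    findmnt /usr
    findmnt /oem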
Sep 9 00:01:17.118705 ignition[954]: INFO : Ignition 2.20.0 Sep 9 00:01:17.118705 ignition[954]: INFO : Stage: files Sep 9 00:01:17.120414 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:01:17.120414 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:01:17.120414 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:01:17.120414 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:01:17.120414 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:01:17.126848 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:01:17.126848 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:01:17.126848 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:01:17.126848 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 00:01:17.126848 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 9 00:01:17.123186 unknown[954]: wrote ssh authorized keys file for user: core Sep 9 00:01:17.324383 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:01:17.890242 systemd-networkd[778]: eth0: Gained IPv6LL Sep 9 00:01:17.917679 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 00:01:17.919855 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:01:17.922161 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:01:17.924072 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:01:17.926184 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:01:17.928178 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:01:17.930409 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:01:17.932371 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:01:17.934419 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:01:17.936699 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:01:17.938831 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:01:17.940822 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:01:17.943806 ignition[954]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:01:17.946243 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:01:17.948515 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 9 00:01:18.394025 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 00:01:19.161206 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:01:19.161206 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 00:01:19.165350 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:01:19.165350 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:01:19.165350 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 00:01:19.165350 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 9 00:01:19.165350 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:01:19.165350 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:01:19.165350 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 9 00:01:19.165350 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:01:19.187796 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:01:19.193319 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:01:19.195223 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:01:19.195223 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:01:19.198775 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:01:19.200503 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:01:19.202797 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:01:19.204687 ignition[954]: INFO : files: files passed Sep 9 00:01:19.205610 ignition[954]: INFO : Ignition finished successfully Sep 9 00:01:19.208405 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:01:19.217277 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:01:19.221160 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:01:19.224125 systemd[1]: ignition-quench.service: Deactivated successfully. 
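Note: the files stage above is the visible effect of the user-supplied Ignition config: an SSH key for the core user, files under /opt and /home/core, an /etc/extensions/kubernetes.raw sysext link, prepare-helm.service written and enabled, and coreos-metadata.service preset-disabled. A config producing roughly these operations could look like the sketch below; the spec version, SSH key, unit body and the omitted /home/core files are placeholders rather than values recovered from this log.

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" }
          },
          {
            "path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          {
            "name": "prepare-helm.service",
            "enabled": true,
            "contents": "[Unit]\nDescription=placeholder\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n\n[Install]\nWantedBy=multi-user.target\n"
          },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }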
Sep 9 00:01:19.225297 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:01:19.234681 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:01:19.239695 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:01:19.239695 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:01:19.243106 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:01:19.246625 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:01:19.249244 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:01:19.256136 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:01:19.280959 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:01:19.281105 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:01:19.282027 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:01:19.286154 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:01:19.286464 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:01:19.287353 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:01:19.306249 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:01:19.314253 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:01:19.324732 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:01:19.325340 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:01:19.325748 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:01:19.326281 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:01:19.326415 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:01:19.333814 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:01:19.334394 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:01:19.334788 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:01:19.335334 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:01:19.335769 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:01:19.336503 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:01:19.336880 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:01:19.337449 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:01:19.337781 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:01:19.338323 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:01:19.338888 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:01:19.339045 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:01:19.339834 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:01:19.340408 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 9 00:01:19.340717 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:01:19.361117 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:01:19.361767 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:01:19.361963 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:01:19.367111 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:01:19.367284 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:01:19.367876 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:01:19.371805 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:01:19.373110 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:01:19.375864 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:01:19.377711 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:01:19.378314 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:01:19.378458 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:01:19.380015 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:01:19.380115 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:01:19.380515 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:01:19.380635 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:01:19.383369 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:01:19.383483 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:01:19.402186 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:01:19.404000 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:01:19.404138 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:01:19.406019 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:01:19.416464 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:01:19.416616 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:01:19.417926 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:01:19.418046 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:01:19.425366 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:01:19.446322 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:01:19.446939 ignition[1009]: INFO : Ignition 2.20.0 Sep 9 00:01:19.446939 ignition[1009]: INFO : Stage: umount Sep 9 00:01:19.447558 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:01:19.447558 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:01:19.448323 ignition[1009]: INFO : umount: umount passed Sep 9 00:01:19.448561 ignition[1009]: INFO : Ignition finished successfully Sep 9 00:01:19.453620 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:01:19.454650 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:01:19.457643 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:01:19.459955 systemd[1]: Stopped target network.target - Network. Sep 9 00:01:19.461982 systemd[1]: ignition-disks.service: Deactivated successfully. 
Sep 9 00:01:19.462916 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:01:19.464902 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:01:19.464955 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:01:19.467864 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:01:19.467923 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:01:19.470692 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:01:19.471659 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:01:19.473812 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:01:19.476004 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:01:19.483631 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:01:19.483772 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:01:19.488202 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 00:01:19.488460 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:01:19.488580 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:01:19.492515 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 00:01:19.493271 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:01:19.493337 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:01:19.508297 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:01:19.510136 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:01:19.510213 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:01:19.512464 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:01:19.512516 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:01:19.514789 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:01:19.514849 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:01:19.516599 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:01:19.516650 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:01:19.518801 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:01:19.521813 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:01:19.521888 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:01:19.529003 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:01:19.529139 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:01:19.537807 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:01:19.538004 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:01:19.540097 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:01:19.540146 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:01:19.542066 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:01:19.542106 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 9 00:01:19.543919 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:01:19.543968 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:01:19.566756 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:01:19.566838 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:01:19.568618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:01:19.568670 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:01:19.579173 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:01:19.579631 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:01:19.579698 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:01:19.583490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:01:19.583544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:01:19.587594 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 00:01:19.587667 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:01:19.598139 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:01:19.598269 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:01:19.888786 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:01:19.888988 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:01:19.891034 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:01:19.892664 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:01:19.892725 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:01:19.908146 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:01:19.915309 systemd[1]: Switching root. Sep 9 00:01:19.952437 systemd-journald[195]: Journal stopped Sep 9 00:01:21.576359 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Sep 9 00:01:21.576426 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:01:21.576448 kernel: SELinux: policy capability open_perms=1 Sep 9 00:01:21.576461 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:01:21.576473 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:01:21.576489 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:01:21.576501 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:01:21.576513 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:01:21.576525 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:01:21.576537 kernel: audit: type=1403 audit(1757376080.641:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:01:21.576554 systemd[1]: Successfully loaded SELinux policy in 39.455ms. Sep 9 00:01:21.576575 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.695ms. 
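Note: the journald gap above (SIGTERM from PID 1, then a restart) marks the pivot from the initramfs into the real root; the new PID 1 then loads the SELinux policy and relabels the early mounts. An illustrative post-boot check of the loaded policy mode (assuming the usual SELinux userspace tools are present) is:

    # Illustrative checks after boot
    getenforce
    cat /sys/fs/selinux/enforce   # 0 = permissive, 1 = enforcing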
Sep 9 00:01:21.576589 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:01:21.576603 systemd[1]: Detected virtualization kvm. Sep 9 00:01:21.576620 systemd[1]: Detected architecture x86-64. Sep 9 00:01:21.576632 systemd[1]: Detected first boot. Sep 9 00:01:21.576645 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:01:21.576658 zram_generator::config[1056]: No configuration found. Sep 9 00:01:21.576672 kernel: Guest personality initialized and is inactive Sep 9 00:01:21.576690 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 00:01:21.576702 kernel: Initialized host personality Sep 9 00:01:21.576714 kernel: NET: Registered PF_VSOCK protocol family Sep 9 00:01:21.576726 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:01:21.576739 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 00:01:21.576752 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:01:21.576768 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:01:21.576781 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:01:21.576793 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:01:21.576808 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:01:21.576821 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:01:21.576833 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:01:21.576846 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:01:21.576859 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:01:21.576871 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:01:21.576883 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:01:21.576895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:01:21.576911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:01:21.576924 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:01:21.576937 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:01:21.576949 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:01:21.576962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:01:21.576990 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:01:21.577003 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:01:21.577015 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:01:21.577030 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:01:21.577042 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Sep 9 00:01:21.577054 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:01:21.577067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:01:21.577085 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:01:21.577097 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:01:21.577109 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:01:21.577122 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:01:21.577135 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:01:21.577150 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 00:01:21.577162 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:01:21.577175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:01:21.577187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:01:21.577199 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:01:21.577212 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:01:21.577224 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:01:21.577236 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:01:21.577249 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:01:21.577263 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:01:21.577276 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:01:21.577288 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:01:21.577300 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:01:21.577313 systemd[1]: Reached target machines.target - Containers. Sep 9 00:01:21.577325 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:01:21.577338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:01:21.577351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:01:21.577375 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:01:21.577388 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:01:21.577401 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:01:21.577414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:01:21.577426 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:01:21.577438 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:01:21.577451 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:01:21.577463 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:01:21.577476 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:01:21.577491 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Sep 9 00:01:21.577503 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:01:21.577516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:01:21.577528 kernel: fuse: init (API version 7.39) Sep 9 00:01:21.577540 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:01:21.577552 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:01:21.577564 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:01:21.577577 kernel: loop: module loaded Sep 9 00:01:21.577591 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:01:21.577603 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 00:01:21.577615 kernel: ACPI: bus type drm_connector registered Sep 9 00:01:21.577628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:01:21.577640 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:01:21.577655 systemd[1]: Stopped verity-setup.service. Sep 9 00:01:21.577667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:01:21.577681 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:01:21.577693 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:01:21.577706 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:01:21.577735 systemd-journald[1134]: Collecting audit messages is disabled. Sep 9 00:01:21.577757 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:01:21.577772 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:01:21.577785 systemd-journald[1134]: Journal started Sep 9 00:01:21.577808 systemd-journald[1134]: Runtime Journal (/run/log/journal/0d312e4a830746a38051a9fd497b1061) is 6M, max 48.2M, 42.2M free. Sep 9 00:01:21.329578 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:01:21.345093 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:01:21.345628 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:01:21.581113 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:01:21.581871 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:01:21.583176 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:01:21.584722 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:01:21.586303 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:01:21.586538 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:01:21.588091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:01:21.588300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:01:21.589690 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:01:21.589903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:01:21.591362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 9 00:01:21.591580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:01:21.593071 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:01:21.593275 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:01:21.594800 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:01:21.595025 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:01:21.596415 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:01:21.597930 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:01:21.599471 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:01:21.601226 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 00:01:21.614654 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:01:21.621079 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:01:21.625945 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:01:21.627146 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:01:21.627186 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:01:21.629552 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 00:01:21.641140 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:01:21.643415 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:01:21.644608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:01:21.645924 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:01:21.648750 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:01:21.651968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:01:21.653292 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:01:21.654607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:01:21.659157 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:01:21.661546 systemd-journald[1134]: Time spent on flushing to /var/log/journal/0d312e4a830746a38051a9fd497b1061 is 19.408ms for 1050 entries. Sep 9 00:01:21.661546 systemd-journald[1134]: System Journal (/var/log/journal/0d312e4a830746a38051a9fd497b1061) is 8M, max 195.6M, 187.6M free. Sep 9 00:01:21.710436 systemd-journald[1134]: Received client request to flush runtime journal. Sep 9 00:01:21.710474 kernel: loop0: detected capacity change from 0 to 224512 Sep 9 00:01:21.663126 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:01:21.666430 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:01:21.670645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:01:21.674478 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Sep 9 00:01:21.676060 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:01:21.677751 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:01:21.696180 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 00:01:21.697685 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:01:21.699775 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:01:21.705149 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 00:01:21.711481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:01:21.713681 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:01:21.723662 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 9 00:01:21.761944 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:01:21.777011 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:01:21.775915 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:01:21.802206 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Sep 9 00:01:21.802224 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Sep 9 00:01:21.808463 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:01:21.812001 kernel: loop1: detected capacity change from 0 to 147912 Sep 9 00:01:21.835448 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 00:01:21.871000 kernel: loop2: detected capacity change from 0 to 138176 Sep 9 00:01:21.909162 kernel: loop3: detected capacity change from 0 to 224512 Sep 9 00:01:21.919998 kernel: loop4: detected capacity change from 0 to 147912 Sep 9 00:01:21.935000 kernel: loop5: detected capacity change from 0 to 138176 Sep 9 00:01:21.951198 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:01:21.951818 (sd-merge)[1200]: Merged extensions into '/usr'. Sep 9 00:01:21.957540 systemd[1]: Reload requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:01:21.957559 systemd[1]: Reloading... Sep 9 00:01:22.032009 zram_generator::config[1229]: No configuration found. Sep 9 00:01:22.041624 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:01:22.153752 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:01:22.222244 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:01:22.222439 systemd[1]: Reloading finished in 264 ms. Sep 9 00:01:22.241346 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:01:22.243155 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:01:22.265083 systemd[1]: Starting ensure-sysext.service... Sep 9 00:01:22.267420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:01:22.282569 systemd[1]: Reload requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... 
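Note: the (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, after which systemd reloads so units shipped by the extensions become visible. Illustrative commands to review the merge on the running machine:

    # Illustrative: list active system extensions and the image behind the
    # kubernetes link written by Ignition earlier
    systemd-sysext status
    ls -l /etc/extensions/ /opt/extensions/kubernetes/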
Sep 9 00:01:22.282591 systemd[1]: Reloading... Sep 9 00:01:22.291474 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:01:22.292197 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:01:22.293165 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:01:22.293467 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Sep 9 00:01:22.293754 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Sep 9 00:01:22.298310 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:01:22.298471 systemd-tmpfiles[1266]: Skipping /boot Sep 9 00:01:22.314040 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:01:22.314055 systemd-tmpfiles[1266]: Skipping /boot Sep 9 00:01:22.341007 zram_generator::config[1295]: No configuration found. Sep 9 00:01:22.466571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:01:22.533524 systemd[1]: Reloading finished in 250 ms. Sep 9 00:01:22.550535 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:01:22.570060 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:01:22.593492 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:01:22.596766 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:01:22.599276 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:01:22.604360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:01:22.608824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:01:22.614767 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:01:22.619751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:01:22.619933 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:01:22.622200 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:01:22.628314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:01:22.634049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:01:22.635216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:01:22.635328 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:01:22.638469 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:01:22.640202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:01:22.641849 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 9 00:01:22.642720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:01:22.645300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:01:22.647216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:01:22.656365 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:01:22.656688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:01:22.658753 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:01:22.663842 systemd-udevd[1344]: Using default interface naming scheme 'v255'. Sep 9 00:01:22.674091 augenrules[1367]: No rules Sep 9 00:01:22.673251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:01:22.673549 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:01:22.686457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:01:22.691289 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:01:22.695401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:01:22.698200 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:01:22.698520 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:01:22.700275 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:01:22.701466 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:01:22.702965 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:01:22.705246 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:01:22.705532 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:01:22.707501 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:01:22.709403 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:01:22.711171 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:01:22.713757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:01:22.714006 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:01:22.715727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:01:22.715948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:01:22.717690 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:01:22.717914 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:01:22.723215 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:01:22.754425 systemd[1]: Finished ensure-sysext.service. Sep 9 00:01:22.761244 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:01:22.769244 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Sep 9 00:01:22.770496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:01:22.774268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:01:22.781380 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:01:22.802346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:01:22.807021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:01:22.808703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:01:22.808765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:01:22.816659 augenrules[1408]: /sbin/augenrules: No change Sep 9 00:01:22.822154 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:01:22.827159 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:01:22.830212 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:01:22.830261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:01:22.831457 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:01:22.831758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:01:22.833008 systemd-resolved[1341]: Positive Trust Anchors: Sep 9 00:01:22.833287 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:01:22.833326 augenrules[1432]: No rules Sep 9 00:01:22.833588 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:01:22.833739 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:01:22.834126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:01:22.836288 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:01:22.836608 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:01:22.838483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:01:22.838807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:01:22.838915 systemd-resolved[1341]: Defaulting to hostname 'linux'. Sep 9 00:01:22.841083 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:01:22.841371 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:01:22.900027 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Sep 9 00:01:22.903716 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 00:01:22.906738 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:01:22.916996 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1388) Sep 9 00:01:22.909709 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:01:22.909776 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:01:22.975197 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 9 00:01:22.979021 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:01:22.998955 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:01:23.052026 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 00:01:23.052366 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 00:01:23.071407 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 9 00:01:23.071718 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 00:01:23.076459 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:01:23.084006 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 9 00:01:23.108998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:01:23.109451 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:01:23.110399 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:01:23.125403 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:01:23.145136 systemd-networkd[1423]: lo: Link UP Sep 9 00:01:23.145150 systemd-networkd[1423]: lo: Gained carrier Sep 9 00:01:23.150376 systemd-networkd[1423]: Enumeration completed Sep 9 00:01:23.150466 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:01:23.151683 systemd[1]: Reached target network.target - Network. Sep 9 00:01:23.155212 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:01:23.155223 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:01:23.157210 systemd-networkd[1423]: eth0: Link UP Sep 9 00:01:23.157222 systemd-networkd[1423]: eth0: Gained carrier Sep 9 00:01:23.157238 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:01:23.172347 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 00:01:23.178212 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:01:23.186060 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:01:23.187421 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. Sep 9 00:01:24.256016 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
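Note: systemd-timesyncd above reaches 10.0.0.1:123, most likely an NTP server advertised in the DHCP lease rather than a statically configured one; the resulting clock step is why systemd-resolved reports flushing its caches in the entries that follow. Illustrative ways to confirm the time source on a running system:

    # Illustrative checks
    timedatectl timesync-status
    networkctl status eth0    # lists NTP servers learned from DHCP, if any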
Sep 9 00:01:24.256087 systemd-timesyncd[1430]: Initial clock synchronization to Tue 2025-09-09 00:01:24.255939 UTC. Sep 9 00:01:24.256457 systemd-resolved[1341]: Clock change detected. Flushing caches. Sep 9 00:01:24.259141 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:01:24.259844 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:01:24.264173 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:01:24.264594 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:01:24.268646 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 00:01:24.278572 kernel: kvm_amd: TSC scaling supported Sep 9 00:01:24.278612 kernel: kvm_amd: Nested Virtualization enabled Sep 9 00:01:24.278626 kernel: kvm_amd: Nested Paging enabled Sep 9 00:01:24.279981 kernel: kvm_amd: LBR virtualization supported Sep 9 00:01:24.280008 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 00:01:24.280085 kernel: kvm_amd: Virtual GIF supported Sep 9 00:01:24.295238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:01:24.316059 kernel: EDAC MC: Ver: 3.0.0 Sep 9 00:01:24.351496 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 00:01:24.369285 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 00:01:24.370978 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:01:24.377926 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:01:24.417782 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 00:01:24.419547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:01:24.420770 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:01:24.422083 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:01:24.423491 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:01:24.425179 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:01:24.426644 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:01:24.428074 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:01:24.429651 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:01:24.429679 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:01:24.430625 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:01:24.436984 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:01:24.440286 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:01:24.444206 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 00:01:24.445743 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 00:01:24.447024 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 00:01:24.451382 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
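
This stretch of the journal restarts systemd-vconsole-setup and finishes it about 76 ms later. A small sketch of measuring such gaps from the journal's "Sep 9 00:01:24.295238" style prefixes is shown below, using the two vconsole-setup timestamps copied from the entries above; note that a delta spanning the systemd-timesyncd clock step reported just above would mix real elapsed time with the step itself.

    from datetime import datetime

    # Timestamps copied from the vconsole-setup start/finish entries above.
    START = "Sep 9 00:01:24.295238"
    FINISH = "Sep 9 00:01:24.370978"

    # The journal prefix carries no year, so strptime defaults to 1900; that is
    # irrelevant when only the difference between two entries matters.
    FMT = "%b %d %H:%M:%S.%f"

    delta = datetime.strptime(FINISH, FMT) - datetime.strptime(START, FMT)
    print(f"systemd-vconsole-setup ran for ~{delta.total_seconds() * 1000:.1f} ms")  # ~75.7 ms
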
Sep 9 00:01:24.452904 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 00:01:24.455874 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 00:01:24.457569 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:01:24.458728 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:01:24.459730 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:01:24.460734 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:01:24.460761 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:01:24.462085 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:01:24.464432 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:01:24.467785 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:01:24.469135 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:01:24.473248 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:01:24.474247 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:01:24.476140 jq[1481]: false Sep 9 00:01:24.478403 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:01:24.484174 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:01:24.486466 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:01:24.491303 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:01:24.498606 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:01:24.500824 extend-filesystems[1482]: Found loop3 Sep 9 00:01:24.502198 extend-filesystems[1482]: Found loop4 Sep 9 00:01:24.502198 extend-filesystems[1482]: Found loop5 Sep 9 00:01:24.502198 extend-filesystems[1482]: Found sr0 Sep 9 00:01:24.502198 extend-filesystems[1482]: Found vda Sep 9 00:01:24.502198 extend-filesystems[1482]: Found vda1 Sep 9 00:01:24.502198 extend-filesystems[1482]: Found vda2 Sep 9 00:01:24.502198 extend-filesystems[1482]: Found vda3 Sep 9 00:01:24.502198 extend-filesystems[1482]: Found usr Sep 9 00:01:24.502198 extend-filesystems[1482]: Found vda4 Sep 9 00:01:24.502198 extend-filesystems[1482]: Found vda6 Sep 9 00:01:24.502198 extend-filesystems[1482]: Found vda7 Sep 9 00:01:24.502198 extend-filesystems[1482]: Found vda9 Sep 9 00:01:24.502198 extend-filesystems[1482]: Checking size of /dev/vda9 Sep 9 00:01:24.509025 dbus-daemon[1480]: [system] SELinux support is enabled Sep 9 00:01:24.505426 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:01:24.507488 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:01:24.511131 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:01:24.518184 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:01:24.520577 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 9 00:01:24.525013 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:01:24.526381 extend-filesystems[1482]: Resized partition /dev/vda9 Sep 9 00:01:24.531858 update_engine[1499]: I20250909 00:01:24.531759 1499 main.cc:92] Flatcar Update Engine starting Sep 9 00:01:24.532149 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:01:24.532501 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:01:24.532869 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:01:24.534126 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:01:24.536259 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:01:24.536525 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:01:24.538095 jq[1500]: true Sep 9 00:01:24.541100 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1388) Sep 9 00:01:24.547799 (ntainerd)[1510]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:01:24.549998 update_engine[1499]: I20250909 00:01:24.549944 1499 update_check_scheduler.cc:74] Next update check in 10m15s Sep 9 00:01:24.554672 extend-filesystems[1503]: resize2fs 1.47.1 (20-May-2024) Sep 9 00:01:24.555471 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:01:24.559314 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:01:24.565615 jq[1507]: true Sep 9 00:01:24.573408 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:01:24.559347 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:01:24.560935 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:01:24.560957 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:01:24.572001 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:01:24.586894 tar[1505]: linux-amd64/LICENSE Sep 9 00:01:24.586894 tar[1505]: linux-amd64/helm Sep 9 00:01:24.622652 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:01:24.744480 extend-filesystems[1503]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:01:24.744480 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:01:24.744480 extend-filesystems[1503]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:01:24.749109 extend-filesystems[1482]: Resized filesystem in /dev/vda9 Sep 9 00:01:24.745445 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:01:24.747184 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:01:24.747515 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
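
The extend-filesystems / resize2fs entries above grow the ext4 filesystem on /dev/vda9 online, from 553472 to 1864699 blocks of 4 KiB. A quick, illustrative conversion of those block counts into sizes:

    # Sizes implied by the EXT4 resize messages above (4 KiB blocks).
    BLOCK_SIZE = 4096
    BEFORE_BLOCKS = 553472
    AFTER_BLOCKS = 1864699

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"/dev/vda9 before: {gib(BEFORE_BLOCKS):.2f} GiB")  # ~2.11 GiB
    print(f"/dev/vda9 after:  {gib(AFTER_BLOCKS):.2f} GiB")   # ~7.11 GiB
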
Sep 9 00:01:24.753530 systemd-logind[1493]: Watching system buttons on /dev/input/event1 (Power Button) Sep 9 00:01:24.753571 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:01:24.755394 systemd-logind[1493]: New seat seat0. Sep 9 00:01:24.756400 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:01:24.758416 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:01:24.760009 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:01:24.765804 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:01:25.353290 containerd[1510]: time="2025-09-09T00:01:25.353084310Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 9 00:01:25.388300 containerd[1510]: time="2025-09-09T00:01:25.388148189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:01:25.390346 containerd[1510]: time="2025-09-09T00:01:25.390234232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:01:25.390346 containerd[1510]: time="2025-09-09T00:01:25.390268026Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:01:25.390346 containerd[1510]: time="2025-09-09T00:01:25.390286851Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:01:25.390740 containerd[1510]: time="2025-09-09T00:01:25.390483390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 9 00:01:25.390740 containerd[1510]: time="2025-09-09T00:01:25.390503197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 9 00:01:25.390740 containerd[1510]: time="2025-09-09T00:01:25.390574220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:01:25.390740 containerd[1510]: time="2025-09-09T00:01:25.390585812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:01:25.390925 containerd[1510]: time="2025-09-09T00:01:25.390842844Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:01:25.390925 containerd[1510]: time="2025-09-09T00:01:25.390860978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:01:25.390925 containerd[1510]: time="2025-09-09T00:01:25.390875235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:01:25.390925 containerd[1510]: time="2025-09-09T00:01:25.390884422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 9 00:01:25.391011 containerd[1510]: time="2025-09-09T00:01:25.390981514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:01:25.391373 containerd[1510]: time="2025-09-09T00:01:25.391255708Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:01:25.391449 containerd[1510]: time="2025-09-09T00:01:25.391425507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:01:25.391449 containerd[1510]: time="2025-09-09T00:01:25.391442068Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:01:25.391560 containerd[1510]: time="2025-09-09T00:01:25.391539631Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:01:25.391621 containerd[1510]: time="2025-09-09T00:01:25.391602469Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:01:25.398269 containerd[1510]: time="2025-09-09T00:01:25.398206977Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:01:25.398315 containerd[1510]: time="2025-09-09T00:01:25.398300913Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:01:25.398342 containerd[1510]: time="2025-09-09T00:01:25.398325259Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 00:01:25.398396 containerd[1510]: time="2025-09-09T00:01:25.398346479Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 00:01:25.398396 containerd[1510]: time="2025-09-09T00:01:25.398377948Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:01:25.398624 containerd[1510]: time="2025-09-09T00:01:25.398589214Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:01:25.398918 containerd[1510]: time="2025-09-09T00:01:25.398884267Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:01:25.399073 containerd[1510]: time="2025-09-09T00:01:25.399043175Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 00:01:25.399073 containerd[1510]: time="2025-09-09T00:01:25.399064555Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 00:01:25.399134 containerd[1510]: time="2025-09-09T00:01:25.399078441Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 00:01:25.399167 containerd[1510]: time="2025-09-09T00:01:25.399134306Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:01:25.399167 containerd[1510]: time="2025-09-09T00:01:25.399152891Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 9 00:01:25.399209 containerd[1510]: time="2025-09-09T00:01:25.399171115Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:01:25.399229 containerd[1510]: time="2025-09-09T00:01:25.399213495Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:01:25.399248 containerd[1510]: time="2025-09-09T00:01:25.399229475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:01:25.399248 containerd[1510]: time="2025-09-09T00:01:25.399242429Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:01:25.399290 containerd[1510]: time="2025-09-09T00:01:25.399256616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:01:25.399290 containerd[1510]: time="2025-09-09T00:01:25.399270201Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:01:25.399326 containerd[1510]: time="2025-09-09T00:01:25.399300107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399326 containerd[1510]: time="2025-09-09T00:01:25.399314665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399377 containerd[1510]: time="2025-09-09T00:01:25.399326487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399377 containerd[1510]: time="2025-09-09T00:01:25.399338419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399377 containerd[1510]: time="2025-09-09T00:01:25.399361142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399377 containerd[1510]: time="2025-09-09T00:01:25.399373966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399453 containerd[1510]: time="2025-09-09T00:01:25.399385357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399453 containerd[1510]: time="2025-09-09T00:01:25.399400816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399453 containerd[1510]: time="2025-09-09T00:01:25.399416445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399453 containerd[1510]: time="2025-09-09T00:01:25.399431594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399453 containerd[1510]: time="2025-09-09T00:01:25.399443286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399453 containerd[1510]: time="2025-09-09T00:01:25.399454497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399560 containerd[1510]: time="2025-09-09T00:01:25.399466920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 9 00:01:25.399560 containerd[1510]: time="2025-09-09T00:01:25.399481107Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 9 00:01:25.399560 containerd[1510]: time="2025-09-09T00:01:25.399506254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399560 containerd[1510]: time="2025-09-09T00:01:25.399520200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399560 containerd[1510]: time="2025-09-09T00:01:25.399530860Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:01:25.399649 containerd[1510]: time="2025-09-09T00:01:25.399601122Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:01:25.399649 containerd[1510]: time="2025-09-09T00:01:25.399638522Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 00:01:25.399688 containerd[1510]: time="2025-09-09T00:01:25.399649342Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:01:25.399688 containerd[1510]: time="2025-09-09T00:01:25.399661796Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 00:01:25.399688 containerd[1510]: time="2025-09-09T00:01:25.399672806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:01:25.399688 containerd[1510]: time="2025-09-09T00:01:25.399685570Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 00:01:25.399766 containerd[1510]: time="2025-09-09T00:01:25.399698955Z" level=info msg="NRI interface is disabled by configuration." Sep 9 00:01:25.399766 containerd[1510]: time="2025-09-09T00:01:25.399710126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 9 00:01:25.400116 containerd[1510]: time="2025-09-09T00:01:25.400057748Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:01:25.400116 containerd[1510]: time="2025-09-09T00:01:25.400116078Z" level=info msg="Connect containerd service" Sep 9 00:01:25.400369 containerd[1510]: time="2025-09-09T00:01:25.400193012Z" level=info msg="using legacy CRI server" Sep 9 00:01:25.400369 containerd[1510]: time="2025-09-09T00:01:25.400201167Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:01:25.400369 containerd[1510]: time="2025-09-09T00:01:25.400343655Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:01:25.401418 containerd[1510]: time="2025-09-09T00:01:25.401380940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:01:25.401735 
containerd[1510]: time="2025-09-09T00:01:25.401709106Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:01:25.401782 containerd[1510]: time="2025-09-09T00:01:25.401762356Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:01:25.401861 containerd[1510]: time="2025-09-09T00:01:25.401815355Z" level=info msg="Start subscribing containerd event" Sep 9 00:01:25.401883 containerd[1510]: time="2025-09-09T00:01:25.401870719Z" level=info msg="Start recovering state" Sep 9 00:01:25.401966 containerd[1510]: time="2025-09-09T00:01:25.401942934Z" level=info msg="Start event monitor" Sep 9 00:01:25.401988 containerd[1510]: time="2025-09-09T00:01:25.401968522Z" level=info msg="Start snapshots syncer" Sep 9 00:01:25.401988 containerd[1510]: time="2025-09-09T00:01:25.401977670Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:01:25.401988 containerd[1510]: time="2025-09-09T00:01:25.401986135Z" level=info msg="Start streaming server" Sep 9 00:01:25.402189 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:01:25.403387 containerd[1510]: time="2025-09-09T00:01:25.403361175Z" level=info msg="containerd successfully booted in 0.053385s" Sep 9 00:01:25.409754 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:01:25.422221 systemd-networkd[1423]: eth0: Gained IPv6LL Sep 9 00:01:25.427177 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:01:25.500141 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:01:25.518362 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:01:25.522934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:01:25.525494 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:01:25.527811 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:01:25.542290 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:01:25.555848 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:01:25.557541 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:01:25.557809 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:01:25.559363 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:01:25.559604 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:01:25.563604 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:01:25.571313 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:01:25.594290 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:01:25.603584 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:01:25.609150 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 00:01:25.610405 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:01:25.637140 tar[1505]: linux-amd64/README.md Sep 9 00:01:25.652462 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:01:26.753418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:01:26.755080 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:01:26.756318 systemd[1]: Startup finished in 1.528s (kernel) + 6.931s (initrd) + 5.086s (userspace) = 13.546s. 
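
The "Startup finished" line above breaks boot time into kernel, initrd and userspace phases. As a small illustrative check, the sketch below parses that exact message and compares the sum of the phases with the reported total; the rounded parts come to 13.545 s, one millisecond under the 13.546 s total, consistent with each value being rounded independently from microsecond precision.

    import re

    # The "Startup finished" message from the journal above.
    LINE = "Startup finished in 1.528s (kernel) + 6.931s (initrd) + 5.086s (userspace) = 13.546s."

    parts = {name: float(sec) for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", LINE)}
    reported_total = float(re.search(r"= ([\d.]+)s", LINE).group(1))

    print(parts)                         # {'kernel': 1.528, 'initrd': 6.931, 'userspace': 5.086}
    print(f"{sum(parts.values()):.3f}")  # 13.545 -- phases rounded separately
    print(f"{reported_total:.3f}")       # 13.546
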
Sep 9 00:01:26.758670 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:01:27.252674 kubelet[1593]: E0909 00:01:27.252514 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:01:27.256785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:01:27.257004 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:01:27.257436 systemd[1]: kubelet.service: Consumed 1.576s CPU time, 269.4M memory peak. Sep 9 00:01:28.331844 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:01:28.333227 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:43516.service - OpenSSH per-connection server daemon (10.0.0.1:43516). Sep 9 00:01:28.381826 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 43516 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:01:28.383650 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:01:28.390833 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:01:28.405263 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:01:28.411825 systemd-logind[1493]: New session 1 of user core. Sep 9 00:01:28.416971 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:01:28.439275 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:01:28.442057 (systemd)[1611]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:01:28.444468 systemd-logind[1493]: New session c1 of user core. Sep 9 00:01:28.593436 systemd[1611]: Queued start job for default target default.target. Sep 9 00:01:28.601400 systemd[1611]: Created slice app.slice - User Application Slice. Sep 9 00:01:28.601426 systemd[1611]: Reached target paths.target - Paths. Sep 9 00:01:28.601464 systemd[1611]: Reached target timers.target - Timers. Sep 9 00:01:28.602913 systemd[1611]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:01:28.613599 systemd[1611]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:01:28.613713 systemd[1611]: Reached target sockets.target - Sockets. Sep 9 00:01:28.613757 systemd[1611]: Reached target basic.target - Basic System. Sep 9 00:01:28.613802 systemd[1611]: Reached target default.target - Main User Target. Sep 9 00:01:28.613832 systemd[1611]: Startup finished in 161ms. Sep 9 00:01:28.614419 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:01:28.616325 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:01:28.686305 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:43518.service - OpenSSH per-connection server daemon (10.0.0.1:43518). Sep 9 00:01:28.719052 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 43518 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:01:28.720575 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:01:28.724900 systemd-logind[1493]: New session 2 of user core. 
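
The kubelet failure earlier in this stretch is caused by a missing /var/lib/kubelet/config.yaml, the KubeletConfiguration file that is normally written when the node is provisioned (for example by kubeadm init or kubeadm join), so systemd keeps restarting the unit until that file appears. A trivial illustrative check of the same condition, assuming nothing beyond the path quoted in the error:

    from pathlib import Path

    # Path quoted in the kubelet error above. On a node that has not yet been
    # provisioned, the file does not exist, which is why kubelet exits with
    # status 1 and systemd keeps scheduling restarts.
    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if CONFIG.is_file():
        print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet can load it")
    else:
        print(f"{CONFIG} missing; expect the 'failed to load Kubelet config file' error above")
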
Sep 9 00:01:28.742170 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:01:28.796637 sshd[1624]: Connection closed by 10.0.0.1 port 43518 Sep 9 00:01:28.797087 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Sep 9 00:01:28.805852 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:43518.service: Deactivated successfully. Sep 9 00:01:28.807917 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:01:28.809707 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:01:28.820285 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:43522.service - OpenSSH per-connection server daemon (10.0.0.1:43522). Sep 9 00:01:28.822223 systemd-logind[1493]: Removed session 2. Sep 9 00:01:28.852952 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 43522 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:01:28.854237 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:01:28.859647 systemd-logind[1493]: New session 3 of user core. Sep 9 00:01:28.871565 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:01:28.920929 sshd[1632]: Connection closed by 10.0.0.1 port 43522 Sep 9 00:01:28.921312 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Sep 9 00:01:28.942815 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:43522.service: Deactivated successfully. Sep 9 00:01:28.945007 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:01:28.947141 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:01:28.948784 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:43528.service - OpenSSH per-connection server daemon (10.0.0.1:43528). Sep 9 00:01:28.950056 systemd-logind[1493]: Removed session 3. Sep 9 00:01:28.995878 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 43528 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:01:28.997793 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:01:29.003208 systemd-logind[1493]: New session 4 of user core. Sep 9 00:01:29.016188 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:01:29.071598 sshd[1640]: Connection closed by 10.0.0.1 port 43528 Sep 9 00:01:29.071959 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Sep 9 00:01:29.083867 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:43528.service: Deactivated successfully. Sep 9 00:01:29.085886 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:01:29.087596 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:01:29.088838 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:43536.service - OpenSSH per-connection server daemon (10.0.0.1:43536). Sep 9 00:01:29.089692 systemd-logind[1493]: Removed session 4. Sep 9 00:01:29.137511 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 43536 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:01:29.139310 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:01:29.143889 systemd-logind[1493]: New session 5 of user core. Sep 9 00:01:29.153182 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 00:01:29.212480 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:01:29.212814 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:01:29.230981 sudo[1649]: pam_unix(sudo:session): session closed for user root Sep 9 00:01:29.232717 sshd[1648]: Connection closed by 10.0.0.1 port 43536 Sep 9 00:01:29.233114 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Sep 9 00:01:29.256339 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:43536.service: Deactivated successfully. Sep 9 00:01:29.258202 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:01:29.259882 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:01:29.271272 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:43544.service - OpenSSH per-connection server daemon (10.0.0.1:43544). Sep 9 00:01:29.272168 systemd-logind[1493]: Removed session 5. Sep 9 00:01:29.303720 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 43544 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:01:29.305132 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:01:29.310587 systemd-logind[1493]: New session 6 of user core. Sep 9 00:01:29.320267 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:01:29.384703 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:01:29.385200 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:01:29.391573 sudo[1659]: pam_unix(sudo:session): session closed for user root Sep 9 00:01:29.401509 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 00:01:29.401933 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:01:29.434877 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:01:29.478293 augenrules[1681]: No rules Sep 9 00:01:29.480312 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:01:29.480608 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:01:29.481839 sudo[1658]: pam_unix(sudo:session): session closed for user root Sep 9 00:01:29.483514 sshd[1657]: Connection closed by 10.0.0.1 port 43544 Sep 9 00:01:29.483963 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Sep 9 00:01:29.496988 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:43544.service: Deactivated successfully. Sep 9 00:01:29.499022 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:01:29.500663 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:01:29.510298 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:43548.service - OpenSSH per-connection server daemon (10.0.0.1:43548). Sep 9 00:01:29.511221 systemd-logind[1493]: Removed session 6. Sep 9 00:01:29.542349 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 43548 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:01:29.543631 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:01:29.548023 systemd-logind[1493]: New session 7 of user core. Sep 9 00:01:29.561162 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 9 00:01:29.613549 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:01:29.613881 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:01:30.215287 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:01:30.215500 (dockerd)[1712]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:01:30.799932 dockerd[1712]: time="2025-09-09T00:01:30.799863290Z" level=info msg="Starting up" Sep 9 00:01:31.929384 systemd[1]: var-lib-docker-metacopy\x2dcheck1854766259-merged.mount: Deactivated successfully. Sep 9 00:01:32.069388 dockerd[1712]: time="2025-09-09T00:01:32.069324106Z" level=info msg="Loading containers: start." Sep 9 00:01:32.659056 kernel: Initializing XFRM netlink socket Sep 9 00:01:32.746613 systemd-networkd[1423]: docker0: Link UP Sep 9 00:01:32.977725 dockerd[1712]: time="2025-09-09T00:01:32.977586667Z" level=info msg="Loading containers: done." Sep 9 00:01:32.997055 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4109900356-merged.mount: Deactivated successfully. Sep 9 00:01:33.122903 dockerd[1712]: time="2025-09-09T00:01:33.122774127Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:01:33.123357 dockerd[1712]: time="2025-09-09T00:01:33.122992076Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 9 00:01:33.123357 dockerd[1712]: time="2025-09-09T00:01:33.123222618Z" level=info msg="Daemon has completed initialization" Sep 9 00:01:33.861959 dockerd[1712]: time="2025-09-09T00:01:33.861623026Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:01:33.861844 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:01:35.013123 containerd[1510]: time="2025-09-09T00:01:35.013005370Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 00:01:37.507419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:01:37.521216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:01:37.714988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:01:37.718930 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:01:38.061595 kubelet[1918]: E0909 00:01:38.061515 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:01:38.068303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:01:38.068552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:01:38.068956 systemd[1]: kubelet.service: Consumed 271ms CPU time, 113.1M memory peak. Sep 9 00:01:41.966457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831247149.mount: Deactivated successfully. 
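
From the dockerd entries above, the daemon logs "Starting up" at 00:01:30.799 and "Daemon has completed initialization" at 00:01:33.123, roughly 2.3 s spent mostly in the storage-driver checks and the "Loading containers" phase in between. An illustrative way to compute that gap from the RFC 3339 timestamps dockerd prints (the helper name parse is ours):

    from datetime import datetime

    # Timestamps copied from the dockerd entries above.
    STARTED = "2025-09-09T00:01:30.799863290Z"
    READY = "2025-09-09T00:01:33.123222618Z"

    def parse(ts: str) -> datetime:
        """Parse dockerd's RFC 3339 timestamp, truncating nanoseconds to microseconds."""
        date, frac = ts.rstrip("Z").split(".")
        return datetime.strptime(f"{date}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f")

    elapsed = (parse(READY) - parse(STARTED)).total_seconds()
    print(f"dockerd initialized in ~{elapsed:.2f} s")  # ~2.32 s
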
Sep 9 00:01:44.144966 containerd[1510]: time="2025-09-09T00:01:44.144883474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:44.145700 containerd[1510]: time="2025-09-09T00:01:44.145640433Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 9 00:01:44.146664 containerd[1510]: time="2025-09-09T00:01:44.146631943Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:44.149352 containerd[1510]: time="2025-09-09T00:01:44.149317611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:44.150527 containerd[1510]: time="2025-09-09T00:01:44.150457268Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 9.137335019s" Sep 9 00:01:44.150618 containerd[1510]: time="2025-09-09T00:01:44.150534784Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 9 00:01:44.151857 containerd[1510]: time="2025-09-09T00:01:44.151828831Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 00:01:46.478664 containerd[1510]: time="2025-09-09T00:01:46.478527543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:46.479702 containerd[1510]: time="2025-09-09T00:01:46.479623348Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 9 00:01:46.482897 containerd[1510]: time="2025-09-09T00:01:46.482840463Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:46.486846 containerd[1510]: time="2025-09-09T00:01:46.486795882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:46.488334 containerd[1510]: time="2025-09-09T00:01:46.488303179Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 2.336434664s" Sep 9 00:01:46.488384 containerd[1510]: time="2025-09-09T00:01:46.488337574Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 9 00:01:46.489220 containerd[1510]: 
time="2025-09-09T00:01:46.489187889Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 00:01:48.198899 containerd[1510]: time="2025-09-09T00:01:48.198833415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:48.199816 containerd[1510]: time="2025-09-09T00:01:48.199748962Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 9 00:01:48.201221 containerd[1510]: time="2025-09-09T00:01:48.201182511Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:48.205283 containerd[1510]: time="2025-09-09T00:01:48.205214314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:48.206682 containerd[1510]: time="2025-09-09T00:01:48.206631492Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.717403848s" Sep 9 00:01:48.206742 containerd[1510]: time="2025-09-09T00:01:48.206686335Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 9 00:01:48.207262 containerd[1510]: time="2025-09-09T00:01:48.207229884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 00:01:48.319321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:01:48.337112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:01:48.564579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:01:48.570586 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:01:49.081478 kubelet[1996]: E0909 00:01:49.081404 1996 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:01:49.086685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:01:49.086977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:01:49.087483 systemd[1]: kubelet.service: Consumed 305ms CPU time, 111.4M memory peak. Sep 9 00:01:50.539211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731272840.mount: Deactivated successfully. 
Sep 9 00:01:51.593094 containerd[1510]: time="2025-09-09T00:01:51.592987087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:51.593798 containerd[1510]: time="2025-09-09T00:01:51.593736643Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 9 00:01:51.595118 containerd[1510]: time="2025-09-09T00:01:51.595064203Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:51.607510 containerd[1510]: time="2025-09-09T00:01:51.607455806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:51.608291 containerd[1510]: time="2025-09-09T00:01:51.608243213Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 3.400977372s" Sep 9 00:01:51.608349 containerd[1510]: time="2025-09-09T00:01:51.608287146Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 9 00:01:51.608863 containerd[1510]: time="2025-09-09T00:01:51.608836756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:01:52.193390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453658488.mount: Deactivated successfully. 
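
The kube-proxy pull above reports 30897170 bytes read over a 3.400977372 s pull, which works out to roughly 8.7 MiB/s of effective transfer. Illustrative arithmetic only, using the two figures quoted in the log:

    # Figures copied from the kube-proxy pull messages above.
    BYTES_READ = 30_897_170   # "bytes read" reported by containerd
    DURATION_S = 3.400977372  # duration from the "Pulled image" message

    mib_per_s = BYTES_READ / DURATION_S / 2**20
    print(f"registry.k8s.io/kube-proxy:v1.32.8 pulled at ~{mib_per_s:.1f} MiB/s")  # ~8.7 MiB/s
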
Sep 9 00:01:53.650310 containerd[1510]: time="2025-09-09T00:01:53.650252268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:53.651103 containerd[1510]: time="2025-09-09T00:01:53.651051217Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 00:01:53.652636 containerd[1510]: time="2025-09-09T00:01:53.652597007Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:53.655723 containerd[1510]: time="2025-09-09T00:01:53.655690499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:53.656903 containerd[1510]: time="2025-09-09T00:01:53.656853751Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.047984413s" Sep 9 00:01:53.656903 containerd[1510]: time="2025-09-09T00:01:53.656888777Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 00:01:53.657434 containerd[1510]: time="2025-09-09T00:01:53.657403522Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:01:54.210427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901636924.mount: Deactivated successfully. 
Sep 9 00:01:54.217378 containerd[1510]: time="2025-09-09T00:01:54.217308203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:54.218244 containerd[1510]: time="2025-09-09T00:01:54.218160482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:01:54.219659 containerd[1510]: time="2025-09-09T00:01:54.219610772Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:54.223887 containerd[1510]: time="2025-09-09T00:01:54.223823184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:54.224866 containerd[1510]: time="2025-09-09T00:01:54.224466640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 567.028283ms" Sep 9 00:01:54.224866 containerd[1510]: time="2025-09-09T00:01:54.224628785Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:01:54.225538 containerd[1510]: time="2025-09-09T00:01:54.225495560Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 00:01:54.937581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279254595.mount: Deactivated successfully. Sep 9 00:01:57.597767 containerd[1510]: time="2025-09-09T00:01:57.597697381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:57.598539 containerd[1510]: time="2025-09-09T00:01:57.598479170Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 9 00:01:57.599913 containerd[1510]: time="2025-09-09T00:01:57.599873098Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:57.603247 containerd[1510]: time="2025-09-09T00:01:57.603211166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:01:57.604500 containerd[1510]: time="2025-09-09T00:01:57.604463933Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.378936903s" Sep 9 00:01:57.604573 containerd[1510]: time="2025-09-09T00:01:57.604501675Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 9 00:01:59.337357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Sep 9 00:01:59.350356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:01:59.588995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:01:59.593791 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:01:59.638206 kubelet[2153]: E0909 00:01:59.638094 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:01:59.642293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:01:59.642539 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:01:59.643016 systemd[1]: kubelet.service: Consumed 215ms CPU time, 110.7M memory peak. Sep 9 00:02:00.134134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:02:00.134292 systemd[1]: kubelet.service: Consumed 215ms CPU time, 110.7M memory peak. Sep 9 00:02:00.148235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:02:00.172824 systemd[1]: Reload requested from client PID 2168 ('systemctl') (unit session-7.scope)... Sep 9 00:02:00.172846 systemd[1]: Reloading... Sep 9 00:02:00.262059 zram_generator::config[2213]: No configuration found. Sep 9 00:02:00.949765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:02:01.053394 systemd[1]: Reloading finished in 880 ms. Sep 9 00:02:01.098210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:02:01.101756 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:02:01.102511 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:02:01.102781 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:02:01.102815 systemd[1]: kubelet.service: Consumed 149ms CPU time, 98.3M memory peak. Sep 9 00:02:01.104286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:02:01.282633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:02:01.286568 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:02:01.335085 kubelet[2262]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:02:01.335085 kubelet[2262]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:02:01.335085 kubelet[2262]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
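[Editor's note] The kubelet start above dies immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is typically only written during init/join, so the unit exits with status 1 and systemd keeps restarting it until the config appears. A small sketch reproducing that preflight check is below; it is not the kubelet's actual loader, and only the path comes from the log.

package main

import (
    "fmt"
    "log"
    "os"

    "sigs.k8s.io/yaml"
)

const kubeletConfigPath = "/var/lib/kubelet/config.yaml"

func main() {
    // Reproduce the failure mode from the log: the file simply is not there yet,
    // so ReadFile returns "no such file or directory".
    data, err := os.ReadFile(kubeletConfigPath)
    if err != nil {
        log.Fatalf("failed to load kubelet config file %q: %v", kubeletConfigPath, err)
    }

    // Decode into a generic map just to confirm the file parses as YAML;
    // the real kubelet decodes it into its KubeletConfiguration type.
    var cfg map[string]interface{}
    if err := yaml.Unmarshal(data, &cfg); err != nil {
        log.Fatalf("failed to parse %q: %v", kubeletConfigPath, err)
    }
    fmt.Println("kubelet config kind:", cfg["kind"], "cgroupDriver:", cfg["cgroupDriver"])
}

Once the file exists, the later restart in the log succeeds and the deprecated-flag warnings that follow refer to settings that should instead live in this config file.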
Sep 9 00:02:01.335484 kubelet[2262]: I0909 00:02:01.335126 2262 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:02:01.784073 kubelet[2262]: I0909 00:02:01.784022 2262 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:02:01.784073 kubelet[2262]: I0909 00:02:01.784071 2262 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:02:01.785725 kubelet[2262]: I0909 00:02:01.784715 2262 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:02:01.810808 kubelet[2262]: E0909 00:02:01.810758 2262 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:01.812079 kubelet[2262]: I0909 00:02:01.812025 2262 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:02:01.817499 kubelet[2262]: E0909 00:02:01.817464 2262 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:02:01.817499 kubelet[2262]: I0909 00:02:01.817495 2262 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:02:01.823227 kubelet[2262]: I0909 00:02:01.823198 2262 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:02:01.824604 kubelet[2262]: I0909 00:02:01.824553 2262 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:02:01.824821 kubelet[2262]: I0909 00:02:01.824603 2262 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:02:01.824948 kubelet[2262]: I0909 00:02:01.824828 2262 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:02:01.824948 kubelet[2262]: I0909 00:02:01.824838 2262 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:02:01.825025 kubelet[2262]: I0909 00:02:01.825006 2262 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:02:01.827898 kubelet[2262]: I0909 00:02:01.827871 2262 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:02:01.827941 kubelet[2262]: I0909 00:02:01.827926 2262 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:02:01.827966 kubelet[2262]: I0909 00:02:01.827952 2262 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:02:01.827966 kubelet[2262]: I0909 00:02:01.827963 2262 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:02:01.833334 kubelet[2262]: I0909 00:02:01.832783 2262 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 9 00:02:01.833334 kubelet[2262]: W0909 00:02:01.833091 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Sep 9 00:02:01.833334 kubelet[2262]: E0909 00:02:01.833132 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:01.833334 kubelet[2262]: I0909 00:02:01.833191 2262 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:02:01.833334 kubelet[2262]: W0909 00:02:01.833208 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Sep 9 00:02:01.833334 kubelet[2262]: E0909 00:02:01.833238 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:01.833954 kubelet[2262]: W0909 00:02:01.833938 2262 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:02:01.838665 kubelet[2262]: I0909 00:02:01.838622 2262 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:02:01.838665 kubelet[2262]: I0909 00:02:01.838668 2262 server.go:1287] "Started kubelet" Sep 9 00:02:01.841259 kubelet[2262]: I0909 00:02:01.839011 2262 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:02:01.841259 kubelet[2262]: I0909 00:02:01.840051 2262 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:02:01.841259 kubelet[2262]: I0909 00:02:01.840449 2262 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:02:01.841530 kubelet[2262]: I0909 00:02:01.841505 2262 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:02:01.841607 kubelet[2262]: I0909 00:02:01.841563 2262 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:02:01.841833 kubelet[2262]: I0909 00:02:01.841816 2262 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:02:01.842815 kubelet[2262]: E0909 00:02:01.842146 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:01.842815 kubelet[2262]: I0909 00:02:01.842180 2262 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:02:01.842815 kubelet[2262]: I0909 00:02:01.842333 2262 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:02:01.842815 kubelet[2262]: I0909 00:02:01.842400 2262 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:02:01.842815 kubelet[2262]: W0909 00:02:01.842638 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Sep 9 00:02:01.842815 kubelet[2262]: E0909 00:02:01.842671 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:01.844979 kubelet[2262]: E0909 00:02:01.843760 2262 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863744596f23264 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:02:01.83864586 +0000 UTC m=+0.546456943,LastTimestamp:2025-09-09 00:02:01.83864586 +0000 UTC m=+0.546456943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:02:01.845236 kubelet[2262]: I0909 00:02:01.845209 2262 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:02:01.845709 kubelet[2262]: I0909 00:02:01.845690 2262 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:02:01.845752 kubelet[2262]: E0909 00:02:01.845727 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms" Sep 9 00:02:01.845752 kubelet[2262]: E0909 00:02:01.845492 2262 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:02:01.845803 kubelet[2262]: I0909 00:02:01.845781 2262 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:02:01.860120 kubelet[2262]: I0909 00:02:01.860073 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:02:01.863903 kubelet[2262]: I0909 00:02:01.863885 2262 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:02:01.863985 kubelet[2262]: I0909 00:02:01.863976 2262 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:02:01.864081 kubelet[2262]: I0909 00:02:01.864071 2262 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:02:01.864253 kubelet[2262]: I0909 00:02:01.864100 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:02:01.864305 kubelet[2262]: I0909 00:02:01.864272 2262 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:02:01.864345 kubelet[2262]: I0909 00:02:01.864305 2262 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 00:02:01.864345 kubelet[2262]: I0909 00:02:01.864313 2262 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:02:01.864396 kubelet[2262]: E0909 00:02:01.864367 2262 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:02:01.864962 kubelet[2262]: W0909 00:02:01.864933 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Sep 9 00:02:01.865006 kubelet[2262]: E0909 00:02:01.864968 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:01.942402 kubelet[2262]: E0909 00:02:01.942320 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:01.964808 kubelet[2262]: E0909 00:02:01.964750 2262 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:02:02.043181 kubelet[2262]: E0909 00:02:02.042996 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:02.046734 kubelet[2262]: E0909 00:02:02.046683 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms" Sep 9 00:02:02.144020 kubelet[2262]: E0909 00:02:02.143948 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:02.165344 kubelet[2262]: E0909 00:02:02.165296 2262 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:02:02.244757 kubelet[2262]: E0909 00:02:02.244697 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:02.345144 kubelet[2262]: E0909 00:02:02.344935 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:02.445641 kubelet[2262]: E0909 00:02:02.445550 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:02.448104 kubelet[2262]: E0909 00:02:02.448074 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms" Sep 9 00:02:02.535118 kubelet[2262]: I0909 00:02:02.535064 2262 policy_none.go:49] "None policy: Start" Sep 9 00:02:02.535118 kubelet[2262]: I0909 00:02:02.535103 2262 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:02:02.535118 kubelet[2262]: I0909 00:02:02.535117 2262 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:02:02.546582 kubelet[2262]: E0909 00:02:02.546517 2262 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Sep 9 00:02:02.565807 kubelet[2262]: E0909 00:02:02.565739 2262 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:02:02.647416 kubelet[2262]: E0909 00:02:02.647259 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:02.748057 kubelet[2262]: E0909 00:02:02.747968 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:02.848774 kubelet[2262]: E0909 00:02:02.848721 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:02.885787 kubelet[2262]: W0909 00:02:02.885729 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Sep 9 00:02:02.885895 kubelet[2262]: E0909 00:02:02.885860 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:02.955115 kubelet[2262]: E0909 00:02:02.948805 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:02.997771 kubelet[2262]: W0909 00:02:02.997664 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Sep 9 00:02:02.997771 kubelet[2262]: E0909 00:02:02.997721 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:03.047464 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:02:03.055222 kubelet[2262]: E0909 00:02:03.055148 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:03.064547 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:02:03.073893 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 9 00:02:03.075119 kubelet[2262]: I0909 00:02:03.075088 2262 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:02:03.075350 kubelet[2262]: I0909 00:02:03.075331 2262 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:02:03.075421 kubelet[2262]: I0909 00:02:03.075353 2262 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:02:03.075785 kubelet[2262]: I0909 00:02:03.075663 2262 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:02:03.076748 kubelet[2262]: E0909 00:02:03.076725 2262 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:02:03.076799 kubelet[2262]: E0909 00:02:03.076773 2262 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:02:03.078495 kubelet[2262]: W0909 00:02:03.078462 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Sep 9 00:02:03.078569 kubelet[2262]: E0909 00:02:03.078494 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:03.177430 kubelet[2262]: I0909 00:02:03.177360 2262 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:02:03.177897 kubelet[2262]: E0909 00:02:03.177869 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Sep 9 00:02:03.233351 kubelet[2262]: W0909 00:02:03.233280 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Sep 9 00:02:03.233351 kubelet[2262]: E0909 00:02:03.233308 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:03.249322 kubelet[2262]: E0909 00:02:03.249274 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="1.6s" Sep 9 00:02:03.374652 systemd[1]: Created slice kubepods-burstable-pod6bb714ac617cda25f7fa5fa201e3bfec.slice - libcontainer container kubepods-burstable-pod6bb714ac617cda25f7fa5fa201e3bfec.slice. 
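[Editor's note] Two retry loops are now running against the still-absent API server: registering the Node object (Post /api/v1/nodes) and ensuring the node Lease in kube-node-lease, with the lease controller backing off 200ms → 400ms → 800ms → 1.6s across the entries above. The sketch below mirrors the lease check and its doubling backoff, under the same hedges as before (anonymous access, insecure TLS, names and the 10s request timeout taken from the log).

package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    cfg := &rest.Config{
        Host:            "https://10.0.0.133:6443",
        TLSClientConfig: rest.TLSClientConfig{Insecure: true},
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // The node lease lives in kube-node-lease and is named after the node.
    for backoff := 200 * time.Millisecond; backoff <= 3200*time.Millisecond; backoff *= 2 {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        _, err := clientset.CoordinationV1().Leases("kube-node-lease").
            Get(ctx, "localhost", metav1.GetOptions{})
        cancel()
        if err == nil {
            fmt.Println("lease exists")
            return
        }
        fmt.Printf("lease check failed (%v), retrying in %v\n", err, backoff)
        time.Sleep(backoff)
    }
}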
Sep 9 00:02:03.379560 kubelet[2262]: I0909 00:02:03.379516 2262 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:02:03.379945 kubelet[2262]: E0909 00:02:03.379909 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Sep 9 00:02:03.382525 kubelet[2262]: E0909 00:02:03.382495 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:02:03.387675 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 9 00:02:03.389562 kubelet[2262]: E0909 00:02:03.389517 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:02:03.399942 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 9 00:02:03.402072 kubelet[2262]: E0909 00:02:03.402011 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:02:03.450654 kubelet[2262]: I0909 00:02:03.450556 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bb714ac617cda25f7fa5fa201e3bfec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bb714ac617cda25f7fa5fa201e3bfec\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:03.450654 kubelet[2262]: I0909 00:02:03.450622 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bb714ac617cda25f7fa5fa201e3bfec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6bb714ac617cda25f7fa5fa201e3bfec\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:03.450654 kubelet[2262]: I0909 00:02:03.450651 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:03.450654 kubelet[2262]: I0909 00:02:03.450666 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:03.450940 kubelet[2262]: I0909 00:02:03.450683 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:02:03.450940 kubelet[2262]: I0909 00:02:03.450696 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bb714ac617cda25f7fa5fa201e3bfec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bb714ac617cda25f7fa5fa201e3bfec\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:03.450940 kubelet[2262]: I0909 00:02:03.450710 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:03.450940 kubelet[2262]: I0909 00:02:03.450723 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:03.450940 kubelet[2262]: I0909 00:02:03.450739 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:03.684100 kubelet[2262]: E0909 00:02:03.683988 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:03.684862 containerd[1510]: time="2025-09-09T00:02:03.684820590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6bb714ac617cda25f7fa5fa201e3bfec,Namespace:kube-system,Attempt:0,}" Sep 9 00:02:03.690934 kubelet[2262]: E0909 00:02:03.690898 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:03.691408 containerd[1510]: time="2025-09-09T00:02:03.691376293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 00:02:03.702719 kubelet[2262]: E0909 00:02:03.702689 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:03.703909 containerd[1510]: time="2025-09-09T00:02:03.703886249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 00:02:03.781518 kubelet[2262]: I0909 00:02:03.781490 2262 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:02:03.781894 kubelet[2262]: E0909 00:02:03.781849 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Sep 9 00:02:03.924421 kubelet[2262]: E0909 00:02:03.924388 2262 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:04.197144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount550770868.mount: Deactivated successfully. Sep 9 00:02:04.206348 containerd[1510]: time="2025-09-09T00:02:04.206278240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:02:04.209921 containerd[1510]: time="2025-09-09T00:02:04.209834169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 9 00:02:04.211101 containerd[1510]: time="2025-09-09T00:02:04.211053949Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:02:04.213020 containerd[1510]: time="2025-09-09T00:02:04.212986900Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:02:04.214111 containerd[1510]: time="2025-09-09T00:02:04.213925302Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:02:04.215331 containerd[1510]: time="2025-09-09T00:02:04.215284988Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:02:04.216150 containerd[1510]: time="2025-09-09T00:02:04.216103822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:02:04.217363 containerd[1510]: time="2025-09-09T00:02:04.217331025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:02:04.218213 containerd[1510]: time="2025-09-09T00:02:04.218170338Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 533.222975ms" Sep 9 00:02:04.219693 containerd[1510]: time="2025-09-09T00:02:04.219645866Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.211391ms" Sep 9 00:02:04.224147 containerd[1510]: time="2025-09-09T00:02:04.224103718Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 520.156792ms" Sep 9 00:02:04.451093 containerd[1510]: 
time="2025-09-09T00:02:04.450773941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:04.452006 containerd[1510]: time="2025-09-09T00:02:04.449839076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:04.452006 containerd[1510]: time="2025-09-09T00:02:04.451261091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:04.452006 containerd[1510]: time="2025-09-09T00:02:04.451859324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:04.452006 containerd[1510]: time="2025-09-09T00:02:04.451880764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:04.452006 containerd[1510]: time="2025-09-09T00:02:04.451874993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:04.452006 containerd[1510]: time="2025-09-09T00:02:04.451971808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:04.452290 containerd[1510]: time="2025-09-09T00:02:04.452142845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:04.460163 containerd[1510]: time="2025-09-09T00:02:04.459969340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:04.460302 containerd[1510]: time="2025-09-09T00:02:04.460179612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:04.460302 containerd[1510]: time="2025-09-09T00:02:04.460244285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:04.460514 containerd[1510]: time="2025-09-09T00:02:04.460461069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:04.533388 systemd[1]: Started cri-containerd-c848d73958d4c04354a721beb80459732ab1ccbc9b6fbea1cad4f591be122f9a.scope - libcontainer container c848d73958d4c04354a721beb80459732ab1ccbc9b6fbea1cad4f591be122f9a. Sep 9 00:02:04.538735 systemd[1]: Started cri-containerd-1ad8b17466d25494e0dd8551e575346e2f38c62f2a0adceba1ef14f6d1b7cd3e.scope - libcontainer container 1ad8b17466d25494e0dd8551e575346e2f38c62f2a0adceba1ef14f6d1b7cd3e. Sep 9 00:02:04.542155 systemd[1]: Started cri-containerd-40dea89626febd77a96126418806ec917f000f5f88ef7547b85d3fcbe98277a3.scope - libcontainer container 40dea89626febd77a96126418806ec917f000f5f88ef7547b85d3fcbe98277a3. 
Sep 9 00:02:04.583940 kubelet[2262]: I0909 00:02:04.583540 2262 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:02:04.583940 kubelet[2262]: E0909 00:02:04.583895 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Sep 9 00:02:04.621161 containerd[1510]: time="2025-09-09T00:02:04.618808087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6bb714ac617cda25f7fa5fa201e3bfec,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ad8b17466d25494e0dd8551e575346e2f38c62f2a0adceba1ef14f6d1b7cd3e\"" Sep 9 00:02:04.627833 kubelet[2262]: E0909 00:02:04.627427 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:04.633137 containerd[1510]: time="2025-09-09T00:02:04.633060643Z" level=info msg="CreateContainer within sandbox \"1ad8b17466d25494e0dd8551e575346e2f38c62f2a0adceba1ef14f6d1b7cd3e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:02:04.677659 containerd[1510]: time="2025-09-09T00:02:04.677545198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"40dea89626febd77a96126418806ec917f000f5f88ef7547b85d3fcbe98277a3\"" Sep 9 00:02:04.678747 kubelet[2262]: E0909 00:02:04.678504 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:04.681385 containerd[1510]: time="2025-09-09T00:02:04.681344462Z" level=info msg="CreateContainer within sandbox \"1ad8b17466d25494e0dd8551e575346e2f38c62f2a0adceba1ef14f6d1b7cd3e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e98cf62795181ff8e2d98653b11dacafcbb6a9d6a1452c4e996512f07c2fa12\"" Sep 9 00:02:04.681885 containerd[1510]: time="2025-09-09T00:02:04.681815812Z" level=info msg="CreateContainer within sandbox \"40dea89626febd77a96126418806ec917f000f5f88ef7547b85d3fcbe98277a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:02:04.683125 containerd[1510]: time="2025-09-09T00:02:04.682652731Z" level=info msg="StartContainer for \"8e98cf62795181ff8e2d98653b11dacafcbb6a9d6a1452c4e996512f07c2fa12\"" Sep 9 00:02:04.684394 containerd[1510]: time="2025-09-09T00:02:04.684360472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c848d73958d4c04354a721beb80459732ab1ccbc9b6fbea1cad4f591be122f9a\"" Sep 9 00:02:04.685486 kubelet[2262]: E0909 00:02:04.685438 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:04.687936 containerd[1510]: time="2025-09-09T00:02:04.687882427Z" level=info msg="CreateContainer within sandbox \"c848d73958d4c04354a721beb80459732ab1ccbc9b6fbea1cad4f591be122f9a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:02:04.705173 containerd[1510]: time="2025-09-09T00:02:04.703810511Z" level=info msg="CreateContainer within sandbox 
\"40dea89626febd77a96126418806ec917f000f5f88ef7547b85d3fcbe98277a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b96022e3425267a49bfd9671db32597915d4e36857ae155671588ebb5b196f73\"" Sep 9 00:02:04.705173 containerd[1510]: time="2025-09-09T00:02:04.704477195Z" level=info msg="StartContainer for \"b96022e3425267a49bfd9671db32597915d4e36857ae155671588ebb5b196f73\"" Sep 9 00:02:04.725725 containerd[1510]: time="2025-09-09T00:02:04.725610730Z" level=info msg="CreateContainer within sandbox \"c848d73958d4c04354a721beb80459732ab1ccbc9b6fbea1cad4f591be122f9a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3e83fe3c6f9b61eb61503c48439a0d8cc65c5467195e8bf3a522eb8a8f16ba5e\"" Sep 9 00:02:04.729113 containerd[1510]: time="2025-09-09T00:02:04.727210505Z" level=info msg="StartContainer for \"3e83fe3c6f9b61eb61503c48439a0d8cc65c5467195e8bf3a522eb8a8f16ba5e\"" Sep 9 00:02:04.778015 systemd[1]: Started cri-containerd-8e98cf62795181ff8e2d98653b11dacafcbb6a9d6a1452c4e996512f07c2fa12.scope - libcontainer container 8e98cf62795181ff8e2d98653b11dacafcbb6a9d6a1452c4e996512f07c2fa12. Sep 9 00:02:04.788266 systemd[1]: Started cri-containerd-b96022e3425267a49bfd9671db32597915d4e36857ae155671588ebb5b196f73.scope - libcontainer container b96022e3425267a49bfd9671db32597915d4e36857ae155671588ebb5b196f73. Sep 9 00:02:04.824329 systemd[1]: Started cri-containerd-3e83fe3c6f9b61eb61503c48439a0d8cc65c5467195e8bf3a522eb8a8f16ba5e.scope - libcontainer container 3e83fe3c6f9b61eb61503c48439a0d8cc65c5467195e8bf3a522eb8a8f16ba5e. Sep 9 00:02:04.837288 kubelet[2262]: W0909 00:02:04.836213 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Sep 9 00:02:04.837288 kubelet[2262]: E0909 00:02:04.836279 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:04.847820 kubelet[2262]: W0909 00:02:04.847716 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Sep 9 00:02:04.847942 kubelet[2262]: E0909 00:02:04.847834 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:02:04.850589 kubelet[2262]: E0909 00:02:04.850556 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="3.2s" Sep 9 00:02:04.853261 containerd[1510]: time="2025-09-09T00:02:04.853225975Z" level=info msg="StartContainer for \"b96022e3425267a49bfd9671db32597915d4e36857ae155671588ebb5b196f73\" returns successfully" Sep 9 
00:02:04.878771 kubelet[2262]: E0909 00:02:04.878173 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:02:04.878771 kubelet[2262]: E0909 00:02:04.878342 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:04.881375 containerd[1510]: time="2025-09-09T00:02:04.881319751Z" level=info msg="StartContainer for \"8e98cf62795181ff8e2d98653b11dacafcbb6a9d6a1452c4e996512f07c2fa12\" returns successfully" Sep 9 00:02:04.916934 containerd[1510]: time="2025-09-09T00:02:04.916808448Z" level=info msg="StartContainer for \"3e83fe3c6f9b61eb61503c48439a0d8cc65c5467195e8bf3a522eb8a8f16ba5e\" returns successfully" Sep 9 00:02:05.888446 kubelet[2262]: E0909 00:02:05.888061 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:02:05.888446 kubelet[2262]: E0909 00:02:05.888354 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:05.891018 kubelet[2262]: E0909 00:02:05.890970 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:02:05.891263 kubelet[2262]: E0909 00:02:05.891222 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:06.186222 kubelet[2262]: I0909 00:02:06.185849 2262 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:02:06.708567 kubelet[2262]: I0909 00:02:06.708330 2262 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:02:06.708567 kubelet[2262]: E0909 00:02:06.708382 2262 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:02:06.728687 kubelet[2262]: E0909 00:02:06.728644 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:06.829315 kubelet[2262]: E0909 00:02:06.829247 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:06.892139 kubelet[2262]: E0909 00:02:06.892107 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:02:06.892534 kubelet[2262]: E0909 00:02:06.892184 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:02:06.892534 kubelet[2262]: E0909 00:02:06.892224 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:06.892534 kubelet[2262]: E0909 00:02:06.892294 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:06.930353 kubelet[2262]: E0909 00:02:06.930309 2262 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:07.030844 kubelet[2262]: E0909 00:02:07.030620 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:07.131679 kubelet[2262]: E0909 00:02:07.131592 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:07.235063 kubelet[2262]: E0909 00:02:07.232858 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:07.333926 kubelet[2262]: E0909 00:02:07.333729 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:07.434623 kubelet[2262]: E0909 00:02:07.434564 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:07.535286 kubelet[2262]: E0909 00:02:07.535225 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:07.636117 kubelet[2262]: E0909 00:02:07.635951 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:07.736352 kubelet[2262]: E0909 00:02:07.736283 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:07.836781 kubelet[2262]: E0909 00:02:07.836730 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:07.894366 kubelet[2262]: E0909 00:02:07.894178 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:02:07.894366 kubelet[2262]: E0909 00:02:07.894319 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:07.894851 kubelet[2262]: E0909 00:02:07.894391 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:02:07.894851 kubelet[2262]: E0909 00:02:07.894560 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:07.937251 kubelet[2262]: E0909 00:02:07.937192 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:08.038152 kubelet[2262]: E0909 00:02:08.038100 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:08.138916 kubelet[2262]: E0909 00:02:08.138843 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:08.239673 kubelet[2262]: E0909 00:02:08.239629 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:08.339925 kubelet[2262]: E0909 00:02:08.339873 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:08.440644 kubelet[2262]: E0909 00:02:08.440569 2262 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"localhost\" not found" Sep 9 00:02:08.541586 kubelet[2262]: E0909 00:02:08.541439 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:08.642331 kubelet[2262]: E0909 00:02:08.642267 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:08.736091 systemd[1]: Reload requested from client PID 2541 ('systemctl') (unit session-7.scope)... Sep 9 00:02:08.736110 systemd[1]: Reloading... Sep 9 00:02:08.742516 kubelet[2262]: E0909 00:02:08.742468 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:08.836070 zram_generator::config[2588]: No configuration found. Sep 9 00:02:08.843390 kubelet[2262]: E0909 00:02:08.843343 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:08.943532 kubelet[2262]: E0909 00:02:08.943499 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:08.967890 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:02:09.045290 kubelet[2262]: I0909 00:02:09.045262 2262 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:09.054327 kubelet[2262]: I0909 00:02:09.054279 2262 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:09.058752 kubelet[2262]: I0909 00:02:09.058693 2262 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:02:09.087763 systemd[1]: Reloading finished in 351 ms. Sep 9 00:02:09.109930 kubelet[2262]: I0909 00:02:09.109823 2262 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:02:09.109877 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:02:09.137696 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:02:09.138104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:02:09.138164 systemd[1]: kubelet.service: Consumed 1.095s CPU time, 135.9M memory peak. Sep 9 00:02:09.148304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:02:09.323683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:02:09.328220 (kubelet)[2630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:02:09.371160 kubelet[2630]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:02:09.371160 kubelet[2630]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:02:09.371160 kubelet[2630]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 00:02:09.371800 kubelet[2630]: I0909 00:02:09.371151 2630 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:02:09.379909 kubelet[2630]: I0909 00:02:09.379851 2630 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:02:09.379909 kubelet[2630]: I0909 00:02:09.379883 2630 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:02:09.380209 kubelet[2630]: I0909 00:02:09.380182 2630 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:02:09.381549 kubelet[2630]: I0909 00:02:09.381520 2630 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 00:02:09.383552 kubelet[2630]: I0909 00:02:09.383509 2630 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:02:09.386694 kubelet[2630]: E0909 00:02:09.386669 2630 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:02:09.386694 kubelet[2630]: I0909 00:02:09.386694 2630 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:02:09.393381 kubelet[2630]: I0909 00:02:09.393353 2630 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:02:09.393812 kubelet[2630]: I0909 00:02:09.393765 2630 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:02:09.394147 kubelet[2630]: I0909 00:02:09.393811 2630 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:02:09.394233 kubelet[2630]: I0909 00:02:09.394157 2630 topology_manager.go:138] "Creating topology 
manager with none policy" Sep 9 00:02:09.394233 kubelet[2630]: I0909 00:02:09.394168 2630 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:02:09.394233 kubelet[2630]: I0909 00:02:09.394227 2630 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:02:09.394451 kubelet[2630]: I0909 00:02:09.394431 2630 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:02:09.394667 kubelet[2630]: I0909 00:02:09.394645 2630 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:02:09.394718 kubelet[2630]: I0909 00:02:09.394677 2630 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:02:09.394718 kubelet[2630]: I0909 00:02:09.394689 2630 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:02:09.395661 kubelet[2630]: I0909 00:02:09.395624 2630 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 9 00:02:09.399049 kubelet[2630]: I0909 00:02:09.396379 2630 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:02:09.399049 kubelet[2630]: I0909 00:02:09.397209 2630 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:02:09.399049 kubelet[2630]: I0909 00:02:09.397244 2630 server.go:1287] "Started kubelet" Sep 9 00:02:09.399049 kubelet[2630]: I0909 00:02:09.398110 2630 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:02:09.399049 kubelet[2630]: I0909 00:02:09.398426 2630 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:02:09.399049 kubelet[2630]: I0909 00:02:09.398472 2630 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:02:09.399244 kubelet[2630]: I0909 00:02:09.399215 2630 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:02:09.399420 kubelet[2630]: I0909 00:02:09.399394 2630 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:02:09.400884 kubelet[2630]: I0909 00:02:09.400856 2630 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:02:09.404560 kubelet[2630]: E0909 00:02:09.404526 2630 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:02:09.404703 kubelet[2630]: I0909 00:02:09.404573 2630 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:02:09.404769 kubelet[2630]: I0909 00:02:09.404751 2630 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:02:09.404892 kubelet[2630]: I0909 00:02:09.404879 2630 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:02:09.408270 kubelet[2630]: I0909 00:02:09.408243 2630 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:02:09.408748 kubelet[2630]: E0909 00:02:09.408715 2630 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:02:09.409788 kubelet[2630]: I0909 00:02:09.409764 2630 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:02:09.409830 kubelet[2630]: I0909 00:02:09.409795 2630 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:02:09.419406 kubelet[2630]: I0909 00:02:09.418611 2630 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:02:09.420980 kubelet[2630]: I0909 00:02:09.420955 2630 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:02:09.421219 kubelet[2630]: I0909 00:02:09.421194 2630 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:02:09.421290 kubelet[2630]: I0909 00:02:09.421225 2630 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:02:09.421290 kubelet[2630]: I0909 00:02:09.421233 2630 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:02:09.421290 kubelet[2630]: E0909 00:02:09.421285 2630 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:02:09.449469 kubelet[2630]: I0909 00:02:09.449434 2630 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:02:09.449469 kubelet[2630]: I0909 00:02:09.449459 2630 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:02:09.449644 kubelet[2630]: I0909 00:02:09.449497 2630 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:02:09.449763 kubelet[2630]: I0909 00:02:09.449739 2630 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:02:09.449805 kubelet[2630]: I0909 00:02:09.449759 2630 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:02:09.449805 kubelet[2630]: I0909 00:02:09.449790 2630 policy_none.go:49] "None policy: Start" Sep 9 00:02:09.449853 kubelet[2630]: I0909 00:02:09.449807 2630 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:02:09.449853 kubelet[2630]: I0909 00:02:09.449824 2630 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:02:09.449964 kubelet[2630]: I0909 00:02:09.449948 2630 state_mem.go:75] "Updated machine memory state" Sep 9 00:02:09.454040 kubelet[2630]: I0909 00:02:09.454013 2630 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:02:09.454323 kubelet[2630]: I0909 00:02:09.454197 2630 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:02:09.454323 kubelet[2630]: I0909 00:02:09.454211 2630 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:02:09.454409 kubelet[2630]: I0909 00:02:09.454395 2630 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:02:09.455587 kubelet[2630]: E0909 00:02:09.455462 2630 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:02:09.522685 kubelet[2630]: I0909 00:02:09.522601 2630 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:09.522824 kubelet[2630]: I0909 00:02:09.522725 2630 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:02:09.522824 kubelet[2630]: I0909 00:02:09.522739 2630 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:09.528668 kubelet[2630]: E0909 00:02:09.528539 2630 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:09.528668 kubelet[2630]: E0909 00:02:09.528615 2630 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:02:09.528668 kubelet[2630]: E0909 00:02:09.528660 2630 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:09.561255 kubelet[2630]: I0909 00:02:09.560180 2630 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:02:09.565292 kubelet[2630]: I0909 00:02:09.565255 2630 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:02:09.565385 kubelet[2630]: I0909 00:02:09.565367 2630 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:02:09.606684 kubelet[2630]: I0909 00:02:09.606591 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:09.606684 kubelet[2630]: I0909 00:02:09.606648 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:09.606998 kubelet[2630]: I0909 00:02:09.606725 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bb714ac617cda25f7fa5fa201e3bfec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bb714ac617cda25f7fa5fa201e3bfec\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:09.606998 kubelet[2630]: I0909 00:02:09.606791 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bb714ac617cda25f7fa5fa201e3bfec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6bb714ac617cda25f7fa5fa201e3bfec\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:09.606998 kubelet[2630]: I0909 00:02:09.606811 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:09.606998 kubelet[2630]: I0909 00:02:09.606862 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:02:09.606998 kubelet[2630]: I0909 00:02:09.606879 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bb714ac617cda25f7fa5fa201e3bfec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bb714ac617cda25f7fa5fa201e3bfec\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:09.607178 kubelet[2630]: I0909 00:02:09.606914 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:09.607178 kubelet[2630]: I0909 00:02:09.606970 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:02:09.829982 kubelet[2630]: E0909 00:02:09.829569 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:09.829982 kubelet[2630]: E0909 00:02:09.829753 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:09.830242 kubelet[2630]: E0909 00:02:09.830216 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:10.211865 update_engine[1499]: I20250909 00:02:10.211764 1499 update_attempter.cc:509] Updating boot flags... 
Sep 9 00:02:10.395723 kubelet[2630]: I0909 00:02:10.395681 2630 apiserver.go:52] "Watching apiserver" Sep 9 00:02:10.405158 kubelet[2630]: I0909 00:02:10.405103 2630 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:02:10.408076 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2679) Sep 9 00:02:10.436058 kubelet[2630]: E0909 00:02:10.433235 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:10.436058 kubelet[2630]: I0909 00:02:10.434536 2630 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:02:10.436058 kubelet[2630]: I0909 00:02:10.434741 2630 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:10.476076 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2680) Sep 9 00:02:10.500090 kubelet[2630]: E0909 00:02:10.497662 2630 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:02:10.500090 kubelet[2630]: E0909 00:02:10.497508 2630 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:02:10.500090 kubelet[2630]: E0909 00:02:10.497962 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:10.500090 kubelet[2630]: E0909 00:02:10.498733 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:10.537860 kubelet[2630]: I0909 00:02:10.536939 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5368958799999999 podStartE2EDuration="1.53689588s" podCreationTimestamp="2025-09-09 00:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:02:10.525315794 +0000 UTC m=+1.192886615" watchObservedRunningTime="2025-09-09 00:02:10.53689588 +0000 UTC m=+1.204466701" Sep 9 00:02:10.537860 kubelet[2630]: I0909 00:02:10.537307 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5373002690000002 podStartE2EDuration="1.537300269s" podCreationTimestamp="2025-09-09 00:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:02:10.536166435 +0000 UTC m=+1.203737256" watchObservedRunningTime="2025-09-09 00:02:10.537300269 +0000 UTC m=+1.204871090" Sep 9 00:02:10.548755 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2680) Sep 9 00:02:11.434012 kubelet[2630]: E0909 00:02:11.433981 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:11.434570 kubelet[2630]: E0909 00:02:11.434133 2630 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:12.436612 kubelet[2630]: E0909 00:02:12.436564 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:13.437940 kubelet[2630]: E0909 00:02:13.437903 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:13.472899 kubelet[2630]: E0909 00:02:13.472861 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:14.181541 kubelet[2630]: I0909 00:02:14.181506 2630 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:02:14.181877 containerd[1510]: time="2025-09-09T00:02:14.181839316Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:02:14.182274 kubelet[2630]: I0909 00:02:14.182025 2630 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:02:14.651018 kubelet[2630]: E0909 00:02:14.650973 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:14.663349 kubelet[2630]: I0909 00:02:14.663291 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.663270902 podStartE2EDuration="5.663270902s" podCreationTimestamp="2025-09-09 00:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:02:10.553872444 +0000 UTC m=+1.221443265" watchObservedRunningTime="2025-09-09 00:02:14.663270902 +0000 UTC m=+5.330841723" Sep 9 00:02:15.111482 systemd[1]: Created slice kubepods-besteffort-podd87562ec_3481_4932_ad85_b95b983b6a95.slice - libcontainer container kubepods-besteffort-podd87562ec_3481_4932_ad85_b95b983b6a95.slice. 
Sep 9 00:02:15.139270 kubelet[2630]: I0909 00:02:15.139211 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d87562ec-3481-4932-ad85-b95b983b6a95-kube-proxy\") pod \"kube-proxy-wqrwm\" (UID: \"d87562ec-3481-4932-ad85-b95b983b6a95\") " pod="kube-system/kube-proxy-wqrwm" Sep 9 00:02:15.139270 kubelet[2630]: I0909 00:02:15.139251 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d87562ec-3481-4932-ad85-b95b983b6a95-lib-modules\") pod \"kube-proxy-wqrwm\" (UID: \"d87562ec-3481-4932-ad85-b95b983b6a95\") " pod="kube-system/kube-proxy-wqrwm" Sep 9 00:02:15.139270 kubelet[2630]: I0909 00:02:15.139266 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf45b\" (UniqueName: \"kubernetes.io/projected/d87562ec-3481-4932-ad85-b95b983b6a95-kube-api-access-cf45b\") pod \"kube-proxy-wqrwm\" (UID: \"d87562ec-3481-4932-ad85-b95b983b6a95\") " pod="kube-system/kube-proxy-wqrwm" Sep 9 00:02:15.139522 kubelet[2630]: I0909 00:02:15.139288 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d87562ec-3481-4932-ad85-b95b983b6a95-xtables-lock\") pod \"kube-proxy-wqrwm\" (UID: \"d87562ec-3481-4932-ad85-b95b983b6a95\") " pod="kube-system/kube-proxy-wqrwm" Sep 9 00:02:15.287182 systemd[1]: Created slice kubepods-besteffort-pod146c7450_be79_4dac_9fb5_069dc40f342a.slice - libcontainer container kubepods-besteffort-pod146c7450_be79_4dac_9fb5_069dc40f342a.slice. Sep 9 00:02:15.339702 kubelet[2630]: I0909 00:02:15.339640 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/146c7450-be79-4dac-9fb5-069dc40f342a-var-lib-calico\") pod \"tigera-operator-755d956888-vmxlp\" (UID: \"146c7450-be79-4dac-9fb5-069dc40f342a\") " pod="tigera-operator/tigera-operator-755d956888-vmxlp" Sep 9 00:02:15.339702 kubelet[2630]: I0909 00:02:15.339683 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gch8\" (UniqueName: \"kubernetes.io/projected/146c7450-be79-4dac-9fb5-069dc40f342a-kube-api-access-2gch8\") pod \"tigera-operator-755d956888-vmxlp\" (UID: \"146c7450-be79-4dac-9fb5-069dc40f342a\") " pod="tigera-operator/tigera-operator-755d956888-vmxlp" Sep 9 00:02:15.420380 kubelet[2630]: E0909 00:02:15.420333 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:15.421024 containerd[1510]: time="2025-09-09T00:02:15.420969230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wqrwm,Uid:d87562ec-3481-4932-ad85-b95b983b6a95,Namespace:kube-system,Attempt:0,}" Sep 9 00:02:15.443560 kubelet[2630]: E0909 00:02:15.443328 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:15.456601 containerd[1510]: time="2025-09-09T00:02:15.456284609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:15.456601 containerd[1510]: time="2025-09-09T00:02:15.456373427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:15.456601 containerd[1510]: time="2025-09-09T00:02:15.456387845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:15.456601 containerd[1510]: time="2025-09-09T00:02:15.456514334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:15.494280 systemd[1]: Started cri-containerd-43afac1e89d6ca30d3426fe16b459e04886a56186803c8caf0a09e7d620b84ae.scope - libcontainer container 43afac1e89d6ca30d3426fe16b459e04886a56186803c8caf0a09e7d620b84ae. Sep 9 00:02:15.519210 containerd[1510]: time="2025-09-09T00:02:15.518866305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wqrwm,Uid:d87562ec-3481-4932-ad85-b95b983b6a95,Namespace:kube-system,Attempt:0,} returns sandbox id \"43afac1e89d6ca30d3426fe16b459e04886a56186803c8caf0a09e7d620b84ae\"" Sep 9 00:02:15.520007 kubelet[2630]: E0909 00:02:15.519587 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:15.521742 containerd[1510]: time="2025-09-09T00:02:15.521680560Z" level=info msg="CreateContainer within sandbox \"43afac1e89d6ca30d3426fe16b459e04886a56186803c8caf0a09e7d620b84ae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:02:15.543814 containerd[1510]: time="2025-09-09T00:02:15.543753710Z" level=info msg="CreateContainer within sandbox \"43afac1e89d6ca30d3426fe16b459e04886a56186803c8caf0a09e7d620b84ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a9e749da26c7dabd4b28d3a8fc6fff13afae1584cf188ff397ef003a181d6aa5\"" Sep 9 00:02:15.544504 containerd[1510]: time="2025-09-09T00:02:15.544459977Z" level=info msg="StartContainer for \"a9e749da26c7dabd4b28d3a8fc6fff13afae1584cf188ff397ef003a181d6aa5\"" Sep 9 00:02:15.585266 systemd[1]: Started cri-containerd-a9e749da26c7dabd4b28d3a8fc6fff13afae1584cf188ff397ef003a181d6aa5.scope - libcontainer container a9e749da26c7dabd4b28d3a8fc6fff13afae1584cf188ff397ef003a181d6aa5. Sep 9 00:02:15.592512 containerd[1510]: time="2025-09-09T00:02:15.592472844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-vmxlp,Uid:146c7450-be79-4dac-9fb5-069dc40f342a,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:02:15.621176 containerd[1510]: time="2025-09-09T00:02:15.621128650Z" level=info msg="StartContainer for \"a9e749da26c7dabd4b28d3a8fc6fff13afae1584cf188ff397ef003a181d6aa5\" returns successfully" Sep 9 00:02:15.625703 containerd[1510]: time="2025-09-09T00:02:15.625573441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:15.625998 containerd[1510]: time="2025-09-09T00:02:15.625927722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:15.625998 containerd[1510]: time="2025-09-09T00:02:15.625951656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:15.626324 containerd[1510]: time="2025-09-09T00:02:15.626249931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:15.649208 systemd[1]: Started cri-containerd-abc21a50347a81ca5052271d58b9434b4bb5a66bdf6f73a6ef449b5ea22df83d.scope - libcontainer container abc21a50347a81ca5052271d58b9434b4bb5a66bdf6f73a6ef449b5ea22df83d. Sep 9 00:02:15.692081 containerd[1510]: time="2025-09-09T00:02:15.691925071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-vmxlp,Uid:146c7450-be79-4dac-9fb5-069dc40f342a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"abc21a50347a81ca5052271d58b9434b4bb5a66bdf6f73a6ef449b5ea22df83d\"" Sep 9 00:02:15.694107 containerd[1510]: time="2025-09-09T00:02:15.694082994Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:02:16.445466 kubelet[2630]: E0909 00:02:16.445429 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:16.453485 kubelet[2630]: I0909 00:02:16.453215 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wqrwm" podStartSLOduration=1.4531962250000001 podStartE2EDuration="1.453196225s" podCreationTimestamp="2025-09-09 00:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:02:16.453015123 +0000 UTC m=+7.120585944" watchObservedRunningTime="2025-09-09 00:02:16.453196225 +0000 UTC m=+7.120767046" Sep 9 00:02:17.139163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067975402.mount: Deactivated successfully. 
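The kube-proxy and tigera-operator entries above walk the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued inside that sandbox, then StartContainer runs it. Purely as an illustration, the same sequence can be replayed by hand with crictl against the node's containerd socket; sandbox.json and container.json below are hypothetical config files, not ones taken from this host.

    # Illustration only: hand-driving the RunPodSandbox -> CreateContainer -> StartContainer
    # flow seen in the log via crictl.
    import subprocess

    def crictl(*args: str) -> str:
        return subprocess.run(["crictl", *args], check=True,
                              capture_output=True, text=True).stdout.strip()

    pod_id = crictl("runp", "sandbox.json")                              # RunPodSandbox
    ctr_id = crictl("create", pod_id, "container.json", "sandbox.json")  # CreateContainer
    crictl("start", ctr_id)                                              # StartContainer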
Sep 9 00:02:17.954313 containerd[1510]: time="2025-09-09T00:02:17.954234013Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:17.955151 containerd[1510]: time="2025-09-09T00:02:17.955098528Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 00:02:17.956335 containerd[1510]: time="2025-09-09T00:02:17.956289719Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:17.958802 containerd[1510]: time="2025-09-09T00:02:17.958758406Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:17.959672 containerd[1510]: time="2025-09-09T00:02:17.959629423Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.265514297s" Sep 9 00:02:17.959712 containerd[1510]: time="2025-09-09T00:02:17.959669930Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 00:02:17.961721 containerd[1510]: time="2025-09-09T00:02:17.961680059Z" level=info msg="CreateContainer within sandbox \"abc21a50347a81ca5052271d58b9434b4bb5a66bdf6f73a6ef449b5ea22df83d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:02:17.974925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1141418969.mount: Deactivated successfully. Sep 9 00:02:17.976417 containerd[1510]: time="2025-09-09T00:02:17.976374797Z" level=info msg="CreateContainer within sandbox \"abc21a50347a81ca5052271d58b9434b4bb5a66bdf6f73a6ef449b5ea22df83d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"94a320b2e3f52c78b463bb5d7c1b7f60f36d1f896bff747ad4587c4a2de274c3\"" Sep 9 00:02:17.976852 containerd[1510]: time="2025-09-09T00:02:17.976831190Z" level=info msg="StartContainer for \"94a320b2e3f52c78b463bb5d7c1b7f60f36d1f896bff747ad4587c4a2de274c3\"" Sep 9 00:02:18.014256 systemd[1]: Started cri-containerd-94a320b2e3f52c78b463bb5d7c1b7f60f36d1f896bff747ad4587c4a2de274c3.scope - libcontainer container 94a320b2e3f52c78b463bb5d7c1b7f60f36d1f896bff747ad4587c4a2de274c3. 
Sep 9 00:02:18.044980 containerd[1510]: time="2025-09-09T00:02:18.044936771Z" level=info msg="StartContainer for \"94a320b2e3f52c78b463bb5d7c1b7f60f36d1f896bff747ad4587c4a2de274c3\" returns successfully" Sep 9 00:02:18.461595 kubelet[2630]: I0909 00:02:18.461073 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-vmxlp" podStartSLOduration=1.193792156 podStartE2EDuration="3.461056564s" podCreationTimestamp="2025-09-09 00:02:15 +0000 UTC" firstStartedPulling="2025-09-09 00:02:15.693094494 +0000 UTC m=+6.360665315" lastFinishedPulling="2025-09-09 00:02:17.960358902 +0000 UTC m=+8.627929723" observedRunningTime="2025-09-09 00:02:18.460918664 +0000 UTC m=+9.128489485" watchObservedRunningTime="2025-09-09 00:02:18.461056564 +0000 UTC m=+9.128627395" Sep 9 00:02:21.462540 kubelet[2630]: E0909 00:02:21.462475 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:22.463099 kubelet[2630]: E0909 00:02:22.463021 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:23.482303 kubelet[2630]: E0909 00:02:23.481704 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:23.576877 sudo[1693]: pam_unix(sudo:session): session closed for user root Sep 9 00:02:23.578912 sshd[1692]: Connection closed by 10.0.0.1 port 43548 Sep 9 00:02:23.579959 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Sep 9 00:02:23.584681 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:43548.service: Deactivated successfully. Sep 9 00:02:23.591232 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:02:23.591763 systemd[1]: session-7.scope: Consumed 5.060s CPU time, 215.8M memory peak. Sep 9 00:02:23.596756 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:02:23.598358 systemd-logind[1493]: Removed session 7. Sep 9 00:02:24.470665 kubelet[2630]: E0909 00:02:24.470617 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:26.250549 systemd[1]: Created slice kubepods-besteffort-pod7c878315_9baf_4471_8399_f13067093270.slice - libcontainer container kubepods-besteffort-pod7c878315_9baf_4471_8399_f13067093270.slice. 
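The tigera-operator pod_startup_latency_tracker entry at 00:02:18.461 above checks out against its own timestamps: the logged values are consistent with podStartE2EDuration being observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excluding the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick recomputation with the values copied from the log, expressed as seconds after 00:02:00:

    created       = 15.0            # podCreationTimestamp 00:02:15
    pull_started  = 15.693094494    # firstStartedPulling
    pull_finished = 17.960358902    # lastFinishedPulling
    running_seen  = 18.461056564    # observedRunningTime

    e2e = running_seen - created                # 3.461056564s -> podStartE2EDuration
    slo = e2e - (pull_finished - pull_started)  # 1.193792156s -> podStartSLOduration
    print(round(e2e, 9), round(slo, 9))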
Sep 9 00:02:26.317351 kubelet[2630]: I0909 00:02:26.317300 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c779l\" (UniqueName: \"kubernetes.io/projected/7c878315-9baf-4471-8399-f13067093270-kube-api-access-c779l\") pod \"calico-typha-5f6665cb59-hhf8r\" (UID: \"7c878315-9baf-4471-8399-f13067093270\") " pod="calico-system/calico-typha-5f6665cb59-hhf8r" Sep 9 00:02:26.320172 kubelet[2630]: I0909 00:02:26.317973 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7c878315-9baf-4471-8399-f13067093270-typha-certs\") pod \"calico-typha-5f6665cb59-hhf8r\" (UID: \"7c878315-9baf-4471-8399-f13067093270\") " pod="calico-system/calico-typha-5f6665cb59-hhf8r" Sep 9 00:02:26.320172 kubelet[2630]: I0909 00:02:26.320103 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c878315-9baf-4471-8399-f13067093270-tigera-ca-bundle\") pod \"calico-typha-5f6665cb59-hhf8r\" (UID: \"7c878315-9baf-4471-8399-f13067093270\") " pod="calico-system/calico-typha-5f6665cb59-hhf8r" Sep 9 00:02:26.479463 systemd[1]: Created slice kubepods-besteffort-podb2b30a2a_9879_4ba8_bd6e_a3f7fc336c33.slice - libcontainer container kubepods-besteffort-podb2b30a2a_9879_4ba8_bd6e_a3f7fc336c33.slice. Sep 9 00:02:26.522112 kubelet[2630]: I0909 00:02:26.521716 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-node-certs\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522112 kubelet[2630]: I0909 00:02:26.521789 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-var-lib-calico\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522112 kubelet[2630]: I0909 00:02:26.521814 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-xtables-lock\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522112 kubelet[2630]: I0909 00:02:26.521838 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-cni-net-dir\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522112 kubelet[2630]: I0909 00:02:26.521860 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-cni-bin-dir\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522340 kubelet[2630]: I0909 00:02:26.521918 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-lib-modules\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522340 kubelet[2630]: I0909 00:02:26.521943 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-policysync\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522340 kubelet[2630]: I0909 00:02:26.522057 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-var-run-calico\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522340 kubelet[2630]: I0909 00:02:26.522121 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-flexvol-driver-host\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522340 kubelet[2630]: I0909 00:02:26.522152 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-cni-log-dir\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522477 kubelet[2630]: I0909 00:02:26.522172 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-tigera-ca-bundle\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.522477 kubelet[2630]: I0909 00:02:26.522191 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwtz4\" (UniqueName: \"kubernetes.io/projected/b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33-kube-api-access-cwtz4\") pod \"calico-node-m2crl\" (UID: \"b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33\") " pod="calico-system/calico-node-m2crl" Sep 9 00:02:26.555317 kubelet[2630]: E0909 00:02:26.555016 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:26.556174 containerd[1510]: time="2025-09-09T00:02:26.556046966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f6665cb59-hhf8r,Uid:7c878315-9baf-4471-8399-f13067093270,Namespace:calico-system,Attempt:0,}" Sep 9 00:02:26.852692 kubelet[2630]: E0909 00:02:26.848900 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:26.871310 containerd[1510]: time="2025-09-09T00:02:26.870637379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:26.871310 containerd[1510]: time="2025-09-09T00:02:26.870716939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:26.871310 containerd[1510]: time="2025-09-09T00:02:26.870742186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:26.871310 containerd[1510]: time="2025-09-09T00:02:26.870880857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:26.900362 kubelet[2630]: E0909 00:02:26.900327 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.900996 kubelet[2630]: W0909 00:02:26.900686 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.900996 kubelet[2630]: E0909 00:02:26.900755 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.901620 kubelet[2630]: E0909 00:02:26.901284 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.901620 kubelet[2630]: W0909 00:02:26.901299 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.901620 kubelet[2630]: E0909 00:02:26.901312 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.903285 kubelet[2630]: E0909 00:02:26.902874 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.903285 kubelet[2630]: W0909 00:02:26.902896 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.903285 kubelet[2630]: E0909 00:02:26.902916 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.905066 kubelet[2630]: E0909 00:02:26.904863 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.906578 kubelet[2630]: W0909 00:02:26.906087 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.906578 kubelet[2630]: E0909 00:02:26.906118 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:26.906578 kubelet[2630]: E0909 00:02:26.906463 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.906578 kubelet[2630]: W0909 00:02:26.906475 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.906578 kubelet[2630]: E0909 00:02:26.906486 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.907050 kubelet[2630]: E0909 00:02:26.907022 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.907125 kubelet[2630]: W0909 00:02:26.907112 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.907191 kubelet[2630]: E0909 00:02:26.907179 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.907686 kubelet[2630]: E0909 00:02:26.907672 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.907775 kubelet[2630]: W0909 00:02:26.907761 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.907863 kubelet[2630]: E0909 00:02:26.907849 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.908292 systemd[1]: Started cri-containerd-ecb9f3897f5859e6d25b1537e74c23583c17b901c64552dd3cbabf9bf2d7650d.scope - libcontainer container ecb9f3897f5859e6d25b1537e74c23583c17b901c64552dd3cbabf9bf2d7650d. Sep 9 00:02:26.908531 kubelet[2630]: E0909 00:02:26.908298 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.908531 kubelet[2630]: W0909 00:02:26.908309 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.908531 kubelet[2630]: E0909 00:02:26.908321 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:26.909258 kubelet[2630]: E0909 00:02:26.908968 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.909258 kubelet[2630]: W0909 00:02:26.909016 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.909258 kubelet[2630]: E0909 00:02:26.909041 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.909988 kubelet[2630]: E0909 00:02:26.909752 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.909988 kubelet[2630]: W0909 00:02:26.909766 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.909988 kubelet[2630]: E0909 00:02:26.909791 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.910531 kubelet[2630]: E0909 00:02:26.910373 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.910531 kubelet[2630]: W0909 00:02:26.910386 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.910531 kubelet[2630]: E0909 00:02:26.910400 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.911213 kubelet[2630]: E0909 00:02:26.910898 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.911213 kubelet[2630]: W0909 00:02:26.910912 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.911213 kubelet[2630]: E0909 00:02:26.910933 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.911671 kubelet[2630]: E0909 00:02:26.911581 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.911671 kubelet[2630]: W0909 00:02:26.911595 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.911671 kubelet[2630]: E0909 00:02:26.911606 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:26.912210 kubelet[2630]: E0909 00:02:26.912129 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.912210 kubelet[2630]: W0909 00:02:26.912150 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.912489 kubelet[2630]: E0909 00:02:26.912173 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.913089 kubelet[2630]: E0909 00:02:26.912746 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.913089 kubelet[2630]: W0909 00:02:26.912761 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.913089 kubelet[2630]: E0909 00:02:26.912775 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.913290 kubelet[2630]: E0909 00:02:26.913277 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.913466 kubelet[2630]: W0909 00:02:26.913360 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.913466 kubelet[2630]: E0909 00:02:26.913385 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.914878 kubelet[2630]: E0909 00:02:26.914863 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.914978 kubelet[2630]: W0909 00:02:26.914963 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.915075 kubelet[2630]: E0909 00:02:26.915059 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.915605 kubelet[2630]: E0909 00:02:26.915578 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.915902 kubelet[2630]: W0909 00:02:26.915700 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.915902 kubelet[2630]: E0909 00:02:26.915720 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:26.916384 kubelet[2630]: E0909 00:02:26.916246 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.916384 kubelet[2630]: W0909 00:02:26.916260 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.916987 kubelet[2630]: E0909 00:02:26.916737 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.917545 kubelet[2630]: E0909 00:02:26.917530 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.917647 kubelet[2630]: W0909 00:02:26.917604 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.917647 kubelet[2630]: E0909 00:02:26.917621 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.927177 kubelet[2630]: E0909 00:02:26.926810 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.927177 kubelet[2630]: W0909 00:02:26.926836 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.927177 kubelet[2630]: E0909 00:02:26.926862 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.927177 kubelet[2630]: I0909 00:02:26.926891 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/13bd77bc-168d-4e24-bcab-4df0554bc784-registration-dir\") pod \"csi-node-driver-nbs8t\" (UID: \"13bd77bc-168d-4e24-bcab-4df0554bc784\") " pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:26.927501 kubelet[2630]: E0909 00:02:26.927298 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.927501 kubelet[2630]: W0909 00:02:26.927311 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.927501 kubelet[2630]: E0909 00:02:26.927346 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:26.927501 kubelet[2630]: I0909 00:02:26.927364 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/13bd77bc-168d-4e24-bcab-4df0554bc784-socket-dir\") pod \"csi-node-driver-nbs8t\" (UID: \"13bd77bc-168d-4e24-bcab-4df0554bc784\") " pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:26.927774 kubelet[2630]: E0909 00:02:26.927754 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.927830 kubelet[2630]: W0909 00:02:26.927789 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.927830 kubelet[2630]: E0909 00:02:26.927820 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.927908 kubelet[2630]: I0909 00:02:26.927841 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/13bd77bc-168d-4e24-bcab-4df0554bc784-varrun\") pod \"csi-node-driver-nbs8t\" (UID: \"13bd77bc-168d-4e24-bcab-4df0554bc784\") " pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:26.928167 kubelet[2630]: E0909 00:02:26.928149 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.928167 kubelet[2630]: W0909 00:02:26.928164 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.928259 kubelet[2630]: E0909 00:02:26.928210 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.928554 kubelet[2630]: E0909 00:02:26.928527 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.928554 kubelet[2630]: W0909 00:02:26.928540 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.928623 kubelet[2630]: E0909 00:02:26.928578 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.928910 kubelet[2630]: E0909 00:02:26.928875 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.928910 kubelet[2630]: W0909 00:02:26.928895 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.928995 kubelet[2630]: E0909 00:02:26.928928 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:26.929263 kubelet[2630]: E0909 00:02:26.929226 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.929263 kubelet[2630]: W0909 00:02:26.929242 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.929263 kubelet[2630]: E0909 00:02:26.929258 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.929532 kubelet[2630]: E0909 00:02:26.929502 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.929532 kubelet[2630]: W0909 00:02:26.929526 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.929611 kubelet[2630]: E0909 00:02:26.929542 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.929827 kubelet[2630]: E0909 00:02:26.929795 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.929827 kubelet[2630]: W0909 00:02:26.929817 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.929960 kubelet[2630]: E0909 00:02:26.929936 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.931139 kubelet[2630]: E0909 00:02:26.930551 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.931139 kubelet[2630]: W0909 00:02:26.930565 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.931139 kubelet[2630]: E0909 00:02:26.930577 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:26.931139 kubelet[2630]: I0909 00:02:26.930600 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13bd77bc-168d-4e24-bcab-4df0554bc784-kubelet-dir\") pod \"csi-node-driver-nbs8t\" (UID: \"13bd77bc-168d-4e24-bcab-4df0554bc784\") " pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:26.931139 kubelet[2630]: E0909 00:02:26.930912 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.931139 kubelet[2630]: W0909 00:02:26.930925 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.931139 kubelet[2630]: E0909 00:02:26.930946 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.931367 kubelet[2630]: I0909 00:02:26.931159 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmgvw\" (UniqueName: \"kubernetes.io/projected/13bd77bc-168d-4e24-bcab-4df0554bc784-kube-api-access-nmgvw\") pod \"csi-node-driver-nbs8t\" (UID: \"13bd77bc-168d-4e24-bcab-4df0554bc784\") " pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:26.933331 kubelet[2630]: E0909 00:02:26.933301 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.933331 kubelet[2630]: W0909 00:02:26.933316 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.933455 kubelet[2630]: E0909 00:02:26.933333 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.933590 kubelet[2630]: E0909 00:02:26.933576 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.933590 kubelet[2630]: W0909 00:02:26.933587 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.933683 kubelet[2630]: E0909 00:02:26.933663 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.933827 kubelet[2630]: E0909 00:02:26.933785 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.933827 kubelet[2630]: W0909 00:02:26.933794 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.933827 kubelet[2630]: E0909 00:02:26.933803 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:26.934019 kubelet[2630]: E0909 00:02:26.934007 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:26.934019 kubelet[2630]: W0909 00:02:26.934018 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:26.934100 kubelet[2630]: E0909 00:02:26.934027 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:26.960372 containerd[1510]: time="2025-09-09T00:02:26.960327325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f6665cb59-hhf8r,Uid:7c878315-9baf-4471-8399-f13067093270,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecb9f3897f5859e6d25b1537e74c23583c17b901c64552dd3cbabf9bf2d7650d\"" Sep 9 00:02:26.961383 kubelet[2630]: E0909 00:02:26.961361 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:26.962766 containerd[1510]: time="2025-09-09T00:02:26.962740151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:02:27.033234 kubelet[2630]: E0909 00:02:27.033180 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.033234 kubelet[2630]: W0909 00:02:27.033208 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.033459 kubelet[2630]: E0909 00:02:27.033254 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.033587 kubelet[2630]: E0909 00:02:27.033562 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.033587 kubelet[2630]: W0909 00:02:27.033577 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.033656 kubelet[2630]: E0909 00:02:27.033592 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.033824 kubelet[2630]: E0909 00:02:27.033796 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.033824 kubelet[2630]: W0909 00:02:27.033806 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.033824 kubelet[2630]: E0909 00:02:27.033819 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:27.034186 kubelet[2630]: E0909 00:02:27.034128 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.034186 kubelet[2630]: W0909 00:02:27.034163 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.034271 kubelet[2630]: E0909 00:02:27.034200 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.034717 kubelet[2630]: E0909 00:02:27.034683 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.034717 kubelet[2630]: W0909 00:02:27.034704 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.034717 kubelet[2630]: E0909 00:02:27.034720 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.034944 kubelet[2630]: E0909 00:02:27.034915 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.034944 kubelet[2630]: W0909 00:02:27.034927 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.034944 kubelet[2630]: E0909 00:02:27.034938 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.035233 kubelet[2630]: E0909 00:02:27.035206 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.035233 kubelet[2630]: W0909 00:02:27.035215 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.035233 kubelet[2630]: E0909 00:02:27.035228 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.035465 kubelet[2630]: E0909 00:02:27.035446 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.035465 kubelet[2630]: W0909 00:02:27.035455 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.035465 kubelet[2630]: E0909 00:02:27.035468 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:27.035686 kubelet[2630]: E0909 00:02:27.035663 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.035686 kubelet[2630]: W0909 00:02:27.035677 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.035771 kubelet[2630]: E0909 00:02:27.035691 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.035913 kubelet[2630]: E0909 00:02:27.035892 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.035913 kubelet[2630]: W0909 00:02:27.035904 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.035992 kubelet[2630]: E0909 00:02:27.035919 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.036179 kubelet[2630]: E0909 00:02:27.036163 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.036179 kubelet[2630]: W0909 00:02:27.036173 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.036267 kubelet[2630]: E0909 00:02:27.036189 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.036628 kubelet[2630]: E0909 00:02:27.036407 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.036628 kubelet[2630]: W0909 00:02:27.036430 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.036628 kubelet[2630]: E0909 00:02:27.036443 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.036734 kubelet[2630]: E0909 00:02:27.036667 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.036734 kubelet[2630]: W0909 00:02:27.036678 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.036734 kubelet[2630]: E0909 00:02:27.036708 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:27.036914 kubelet[2630]: E0909 00:02:27.036896 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.036914 kubelet[2630]: W0909 00:02:27.036908 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.037016 kubelet[2630]: E0909 00:02:27.036981 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.037195 kubelet[2630]: E0909 00:02:27.037128 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.037195 kubelet[2630]: W0909 00:02:27.037142 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.037195 kubelet[2630]: E0909 00:02:27.037189 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.037526 kubelet[2630]: E0909 00:02:27.037508 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.037526 kubelet[2630]: W0909 00:02:27.037523 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.037588 kubelet[2630]: E0909 00:02:27.037539 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.037789 kubelet[2630]: E0909 00:02:27.037771 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.037789 kubelet[2630]: W0909 00:02:27.037784 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.037872 kubelet[2630]: E0909 00:02:27.037803 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.038093 kubelet[2630]: E0909 00:02:27.038078 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.038093 kubelet[2630]: W0909 00:02:27.038092 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.038147 kubelet[2630]: E0909 00:02:27.038119 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:27.038364 kubelet[2630]: E0909 00:02:27.038352 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.038391 kubelet[2630]: W0909 00:02:27.038363 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.038391 kubelet[2630]: E0909 00:02:27.038376 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.038669 kubelet[2630]: E0909 00:02:27.038645 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.038669 kubelet[2630]: W0909 00:02:27.038657 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.038731 kubelet[2630]: E0909 00:02:27.038674 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.039012 kubelet[2630]: E0909 00:02:27.038985 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.039012 kubelet[2630]: W0909 00:02:27.039001 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.039100 kubelet[2630]: E0909 00:02:27.039018 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.039391 kubelet[2630]: E0909 00:02:27.039362 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.039391 kubelet[2630]: W0909 00:02:27.039376 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.039478 kubelet[2630]: E0909 00:02:27.039394 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.039672 kubelet[2630]: E0909 00:02:27.039653 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.039672 kubelet[2630]: W0909 00:02:27.039667 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.039854 kubelet[2630]: E0909 00:02:27.039683 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:27.040069 kubelet[2630]: E0909 00:02:27.040027 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.040069 kubelet[2630]: W0909 00:02:27.040067 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.040131 kubelet[2630]: E0909 00:02:27.040098 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.040355 kubelet[2630]: E0909 00:02:27.040324 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.040355 kubelet[2630]: W0909 00:02:27.040350 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.040467 kubelet[2630]: E0909 00:02:27.040363 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.045568 kubelet[2630]: E0909 00:02:27.045528 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:27.045568 kubelet[2630]: W0909 00:02:27.045556 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:27.045664 kubelet[2630]: E0909 00:02:27.045579 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:27.083537 containerd[1510]: time="2025-09-09T00:02:27.083465742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2crl,Uid:b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33,Namespace:calico-system,Attempt:0,}" Sep 9 00:02:27.113947 containerd[1510]: time="2025-09-09T00:02:27.113578956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:27.113947 containerd[1510]: time="2025-09-09T00:02:27.113660640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:27.113947 containerd[1510]: time="2025-09-09T00:02:27.113674276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:27.113947 containerd[1510]: time="2025-09-09T00:02:27.113771960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:27.137235 systemd[1]: Started cri-containerd-9f73349518c4abbbffb24efed96c271ee8b5f404850aceac744237540363d6ae.scope - libcontainer container 9f73349518c4abbbffb24efed96c271ee8b5f404850aceac744237540363d6ae. 
Sep 9 00:02:27.162787 containerd[1510]: time="2025-09-09T00:02:27.162658291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2crl,Uid:b2b30a2a-9879-4ba8-bd6e-a3f7fc336c33,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f73349518c4abbbffb24efed96c271ee8b5f404850aceac744237540363d6ae\"" Sep 9 00:02:28.422584 kubelet[2630]: E0909 00:02:28.422523 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:29.334180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4141237764.mount: Deactivated successfully. Sep 9 00:02:29.778127 containerd[1510]: time="2025-09-09T00:02:29.778065985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:29.779634 containerd[1510]: time="2025-09-09T00:02:29.779541062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 00:02:29.781318 containerd[1510]: time="2025-09-09T00:02:29.781283131Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:29.879117 containerd[1510]: time="2025-09-09T00:02:29.879064029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:29.880260 containerd[1510]: time="2025-09-09T00:02:29.880219834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.917444067s" Sep 9 00:02:29.880260 containerd[1510]: time="2025-09-09T00:02:29.880254841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 00:02:29.915073 containerd[1510]: time="2025-09-09T00:02:29.915011087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:02:29.934122 containerd[1510]: time="2025-09-09T00:02:29.934064875Z" level=info msg="CreateContainer within sandbox \"ecb9f3897f5859e6d25b1537e74c23583c17b901c64552dd3cbabf9bf2d7650d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:02:29.952369 containerd[1510]: time="2025-09-09T00:02:29.952302867Z" level=info msg="CreateContainer within sandbox \"ecb9f3897f5859e6d25b1537e74c23583c17b901c64552dd3cbabf9bf2d7650d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2ebc3b782fcde52729e8d7afe8b4203ec5c5e22e7bcbcd45cbb01467012b7357\"" Sep 9 00:02:29.953972 containerd[1510]: time="2025-09-09T00:02:29.953939248Z" level=info msg="StartContainer for \"2ebc3b782fcde52729e8d7afe8b4203ec5c5e22e7bcbcd45cbb01467012b7357\"" Sep 9 00:02:29.987281 systemd[1]: Started cri-containerd-2ebc3b782fcde52729e8d7afe8b4203ec5c5e22e7bcbcd45cbb01467012b7357.scope - libcontainer container 
2ebc3b782fcde52729e8d7afe8b4203ec5c5e22e7bcbcd45cbb01467012b7357. Sep 9 00:02:30.361680 containerd[1510]: time="2025-09-09T00:02:30.361637207Z" level=info msg="StartContainer for \"2ebc3b782fcde52729e8d7afe8b4203ec5c5e22e7bcbcd45cbb01467012b7357\" returns successfully" Sep 9 00:02:30.421700 kubelet[2630]: E0909 00:02:30.421628 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:30.513710 kubelet[2630]: E0909 00:02:30.513635 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:30.540534 kubelet[2630]: E0909 00:02:30.540480 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.540534 kubelet[2630]: W0909 00:02:30.540519 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.542396 kubelet[2630]: E0909 00:02:30.542364 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.542866 kubelet[2630]: E0909 00:02:30.542651 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.542866 kubelet[2630]: W0909 00:02:30.542666 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.542866 kubelet[2630]: E0909 00:02:30.542677 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.542989 kubelet[2630]: E0909 00:02:30.542965 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.542989 kubelet[2630]: W0909 00:02:30.542986 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.542989 kubelet[2630]: E0909 00:02:30.542998 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.543310 kubelet[2630]: E0909 00:02:30.543292 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.543310 kubelet[2630]: W0909 00:02:30.543307 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.543387 kubelet[2630]: E0909 00:02:30.543319 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:30.543591 kubelet[2630]: E0909 00:02:30.543575 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.543591 kubelet[2630]: W0909 00:02:30.543589 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.543651 kubelet[2630]: E0909 00:02:30.543600 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.543932 kubelet[2630]: E0909 00:02:30.543906 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.543932 kubelet[2630]: W0909 00:02:30.543921 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.543994 kubelet[2630]: E0909 00:02:30.543933 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.544419 kubelet[2630]: E0909 00:02:30.544237 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.544419 kubelet[2630]: W0909 00:02:30.544260 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.544419 kubelet[2630]: E0909 00:02:30.544285 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.544635 kubelet[2630]: E0909 00:02:30.544621 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.544711 kubelet[2630]: W0909 00:02:30.544700 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.544794 kubelet[2630]: E0909 00:02:30.544781 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.545154 kubelet[2630]: E0909 00:02:30.545107 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.545154 kubelet[2630]: W0909 00:02:30.545118 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.545154 kubelet[2630]: E0909 00:02:30.545127 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:30.545538 kubelet[2630]: E0909 00:02:30.545523 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.545538 kubelet[2630]: W0909 00:02:30.545534 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.545945 kubelet[2630]: E0909 00:02:30.545543 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.545945 kubelet[2630]: E0909 00:02:30.545728 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.545945 kubelet[2630]: W0909 00:02:30.545735 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.545945 kubelet[2630]: E0909 00:02:30.545743 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.545945 kubelet[2630]: E0909 00:02:30.545918 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.545945 kubelet[2630]: W0909 00:02:30.545925 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.545945 kubelet[2630]: E0909 00:02:30.545932 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.546202 kubelet[2630]: E0909 00:02:30.546138 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.546202 kubelet[2630]: W0909 00:02:30.546145 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.546202 kubelet[2630]: E0909 00:02:30.546153 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.546330 kubelet[2630]: E0909 00:02:30.546318 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.546330 kubelet[2630]: W0909 00:02:30.546326 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.549087 kubelet[2630]: E0909 00:02:30.546334 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:30.551308 kubelet[2630]: E0909 00:02:30.551265 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.551357 kubelet[2630]: W0909 00:02:30.551306 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.551357 kubelet[2630]: E0909 00:02:30.551338 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.560283 kubelet[2630]: E0909 00:02:30.560243 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.560283 kubelet[2630]: W0909 00:02:30.560272 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.560283 kubelet[2630]: E0909 00:02:30.560294 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.560560 kubelet[2630]: E0909 00:02:30.560537 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.560560 kubelet[2630]: W0909 00:02:30.560545 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.560560 kubelet[2630]: E0909 00:02:30.560554 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.562293 kubelet[2630]: E0909 00:02:30.562276 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.562293 kubelet[2630]: W0909 00:02:30.562290 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.562381 kubelet[2630]: E0909 00:02:30.562313 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.562687 kubelet[2630]: E0909 00:02:30.562669 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.562687 kubelet[2630]: W0909 00:02:30.562683 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.562790 kubelet[2630]: E0909 00:02:30.562770 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:30.562940 kubelet[2630]: E0909 00:02:30.562912 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.562940 kubelet[2630]: W0909 00:02:30.562924 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.563039 kubelet[2630]: E0909 00:02:30.562978 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.563162 kubelet[2630]: E0909 00:02:30.563147 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.563162 kubelet[2630]: W0909 00:02:30.563158 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.564095 kubelet[2630]: E0909 00:02:30.564074 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.564306 kubelet[2630]: E0909 00:02:30.564289 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.564306 kubelet[2630]: W0909 00:02:30.564303 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.564376 kubelet[2630]: E0909 00:02:30.564317 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.566297 kubelet[2630]: E0909 00:02:30.566274 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.566297 kubelet[2630]: W0909 00:02:30.566289 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.566426 kubelet[2630]: E0909 00:02:30.566407 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.569136 kubelet[2630]: E0909 00:02:30.566544 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.569136 kubelet[2630]: W0909 00:02:30.566553 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.569136 kubelet[2630]: E0909 00:02:30.566624 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:30.569136 kubelet[2630]: E0909 00:02:30.566748 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.569136 kubelet[2630]: W0909 00:02:30.566756 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.569136 kubelet[2630]: E0909 00:02:30.566868 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.569136 kubelet[2630]: E0909 00:02:30.568168 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.569136 kubelet[2630]: W0909 00:02:30.568177 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.569136 kubelet[2630]: E0909 00:02:30.568197 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.569411 kubelet[2630]: E0909 00:02:30.569336 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.569411 kubelet[2630]: W0909 00:02:30.569355 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.570081 kubelet[2630]: E0909 00:02:30.570063 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.570299 kubelet[2630]: E0909 00:02:30.570283 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.570299 kubelet[2630]: W0909 00:02:30.570295 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.570379 kubelet[2630]: E0909 00:02:30.570338 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.571271 kubelet[2630]: E0909 00:02:30.571257 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.571271 kubelet[2630]: W0909 00:02:30.571268 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.571343 kubelet[2630]: E0909 00:02:30.571297 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:30.574290 kubelet[2630]: E0909 00:02:30.574270 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.574290 kubelet[2630]: W0909 00:02:30.574286 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.574441 kubelet[2630]: E0909 00:02:30.574362 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.575552 kubelet[2630]: E0909 00:02:30.575532 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.575625 kubelet[2630]: W0909 00:02:30.575556 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.575625 kubelet[2630]: E0909 00:02:30.575567 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.580069 kubelet[2630]: E0909 00:02:30.578381 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.580069 kubelet[2630]: W0909 00:02:30.578395 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.580069 kubelet[2630]: E0909 00:02:30.578514 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:02:30.580069 kubelet[2630]: E0909 00:02:30.578768 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:02:30.580069 kubelet[2630]: W0909 00:02:30.578777 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:02:30.580069 kubelet[2630]: E0909 00:02:30.578790 2630 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:02:31.374628 containerd[1510]: time="2025-09-09T00:02:31.374573946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:31.375442 containerd[1510]: time="2025-09-09T00:02:31.375406082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 00:02:31.376612 containerd[1510]: time="2025-09-09T00:02:31.376562257Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:31.379075 containerd[1510]: time="2025-09-09T00:02:31.379000555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:31.380343 containerd[1510]: time="2025-09-09T00:02:31.380300511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.46522387s" Sep 9 00:02:31.380422 containerd[1510]: time="2025-09-09T00:02:31.380349433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 00:02:31.382344 containerd[1510]: time="2025-09-09T00:02:31.382306907Z" level=info msg="CreateContainer within sandbox \"9f73349518c4abbbffb24efed96c271ee8b5f404850aceac744237540363d6ae\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:02:31.397452 containerd[1510]: time="2025-09-09T00:02:31.397404285Z" level=info msg="CreateContainer within sandbox \"9f73349518c4abbbffb24efed96c271ee8b5f404850aceac744237540363d6ae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"27ac0630330afce860702f45049751fc7f737340bc77aa9e2a77ccf16b75b32d\"" Sep 9 00:02:31.397762 containerd[1510]: time="2025-09-09T00:02:31.397741239Z" level=info msg="StartContainer for \"27ac0630330afce860702f45049751fc7f737340bc77aa9e2a77ccf16b75b32d\"" Sep 9 00:02:31.435170 systemd[1]: Started cri-containerd-27ac0630330afce860702f45049751fc7f737340bc77aa9e2a77ccf16b75b32d.scope - libcontainer container 27ac0630330afce860702f45049751fc7f737340bc77aa9e2a77ccf16b75b32d. Sep 9 00:02:31.471173 containerd[1510]: time="2025-09-09T00:02:31.471127905Z" level=info msg="StartContainer for \"27ac0630330afce860702f45049751fc7f737340bc77aa9e2a77ccf16b75b32d\" returns successfully" Sep 9 00:02:31.483717 systemd[1]: cri-containerd-27ac0630330afce860702f45049751fc7f737340bc77aa9e2a77ccf16b75b32d.scope: Deactivated successfully. Sep 9 00:02:31.505024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27ac0630330afce860702f45049751fc7f737340bc77aa9e2a77ccf16b75b32d-rootfs.mount: Deactivated successfully. 
Sep 9 00:02:31.517293 kubelet[2630]: I0909 00:02:31.517251 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:02:31.517901 kubelet[2630]: E0909 00:02:31.517693 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:31.565340 containerd[1510]: time="2025-09-09T00:02:31.565252904Z" level=info msg="shim disconnected" id=27ac0630330afce860702f45049751fc7f737340bc77aa9e2a77ccf16b75b32d namespace=k8s.io Sep 9 00:02:31.565837 containerd[1510]: time="2025-09-09T00:02:31.565616288Z" level=warning msg="cleaning up after shim disconnected" id=27ac0630330afce860702f45049751fc7f737340bc77aa9e2a77ccf16b75b32d namespace=k8s.io Sep 9 00:02:31.565837 containerd[1510]: time="2025-09-09T00:02:31.565640464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:02:31.569164 kubelet[2630]: I0909 00:02:31.569085 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f6665cb59-hhf8r" podStartSLOduration=2.616726433 podStartE2EDuration="5.569025323s" podCreationTimestamp="2025-09-09 00:02:26 +0000 UTC" firstStartedPulling="2025-09-09 00:02:26.96248031 +0000 UTC m=+17.630051132" lastFinishedPulling="2025-09-09 00:02:29.914779201 +0000 UTC m=+20.582350022" observedRunningTime="2025-09-09 00:02:30.533311753 +0000 UTC m=+21.200882584" watchObservedRunningTime="2025-09-09 00:02:31.569025323 +0000 UTC m=+22.236619357" Sep 9 00:02:32.422388 kubelet[2630]: E0909 00:02:32.422298 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:32.520507 containerd[1510]: time="2025-09-09T00:02:32.520470101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:02:34.422680 kubelet[2630]: E0909 00:02:34.422595 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:36.413404 containerd[1510]: time="2025-09-09T00:02:36.412918230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:36.413885 containerd[1510]: time="2025-09-09T00:02:36.413697976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 00:02:36.414943 containerd[1510]: time="2025-09-09T00:02:36.414914503Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:36.417382 containerd[1510]: time="2025-09-09T00:02:36.417342548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:36.418125 containerd[1510]: time="2025-09-09T00:02:36.418095083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id 
\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.897588864s" Sep 9 00:02:36.418161 containerd[1510]: time="2025-09-09T00:02:36.418126231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 00:02:36.420536 containerd[1510]: time="2025-09-09T00:02:36.420496096Z" level=info msg="CreateContainer within sandbox \"9f73349518c4abbbffb24efed96c271ee8b5f404850aceac744237540363d6ae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:02:36.421754 kubelet[2630]: E0909 00:02:36.421721 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:36.439196 containerd[1510]: time="2025-09-09T00:02:36.439144976Z" level=info msg="CreateContainer within sandbox \"9f73349518c4abbbffb24efed96c271ee8b5f404850aceac744237540363d6ae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d1ffd997003f14d1dd5f029e0af45583a31e486ab39f30afb750c72f5a29f234\"" Sep 9 00:02:36.439759 containerd[1510]: time="2025-09-09T00:02:36.439711622Z" level=info msg="StartContainer for \"d1ffd997003f14d1dd5f029e0af45583a31e486ab39f30afb750c72f5a29f234\"" Sep 9 00:02:36.473253 systemd[1]: Started cri-containerd-d1ffd997003f14d1dd5f029e0af45583a31e486ab39f30afb750c72f5a29f234.scope - libcontainer container d1ffd997003f14d1dd5f029e0af45583a31e486ab39f30afb750c72f5a29f234. Sep 9 00:02:36.590626 containerd[1510]: time="2025-09-09T00:02:36.588743791Z" level=info msg="StartContainer for \"d1ffd997003f14d1dd5f029e0af45583a31e486ab39f30afb750c72f5a29f234\" returns successfully" Sep 9 00:02:38.421676 kubelet[2630]: E0909 00:02:38.421612 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:38.946747 containerd[1510]: time="2025-09-09T00:02:38.946660930Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:02:38.950921 systemd[1]: cri-containerd-d1ffd997003f14d1dd5f029e0af45583a31e486ab39f30afb750c72f5a29f234.scope: Deactivated successfully. Sep 9 00:02:38.951713 systemd[1]: cri-containerd-d1ffd997003f14d1dd5f029e0af45583a31e486ab39f30afb750c72f5a29f234.scope: Consumed 635ms CPU time, 181.8M memory peak, 3.1M read from disk, 171.3M written to disk. Sep 9 00:02:38.973684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1ffd997003f14d1dd5f029e0af45583a31e486ab39f30afb750c72f5a29f234-rootfs.mount: Deactivated successfully. 
Sep 9 00:02:38.978846 kubelet[2630]: I0909 00:02:38.978817 2630 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:02:39.023122 kubelet[2630]: I0909 00:02:39.023078 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs2bb\" (UniqueName: \"kubernetes.io/projected/8b3fc743-df1d-4d9a-822b-01f3200e3e51-kube-api-access-cs2bb\") pod \"coredns-668d6bf9bc-rm7mk\" (UID: \"8b3fc743-df1d-4d9a-822b-01f3200e3e51\") " pod="kube-system/coredns-668d6bf9bc-rm7mk" Sep 9 00:02:39.023407 kubelet[2630]: I0909 00:02:39.023129 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckm6\" (UniqueName: \"kubernetes.io/projected/abf21d55-a6e7-4fd1-ad4c-e82f7525f680-kube-api-access-gckm6\") pod \"calico-apiserver-79646b996b-cw46r\" (UID: \"abf21d55-a6e7-4fd1-ad4c-e82f7525f680\") " pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" Sep 9 00:02:39.023407 kubelet[2630]: I0909 00:02:39.023147 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abf21d55-a6e7-4fd1-ad4c-e82f7525f680-calico-apiserver-certs\") pod \"calico-apiserver-79646b996b-cw46r\" (UID: \"abf21d55-a6e7-4fd1-ad4c-e82f7525f680\") " pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" Sep 9 00:02:39.023407 kubelet[2630]: I0909 00:02:39.023174 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b3fc743-df1d-4d9a-822b-01f3200e3e51-config-volume\") pod \"coredns-668d6bf9bc-rm7mk\" (UID: \"8b3fc743-df1d-4d9a-822b-01f3200e3e51\") " pod="kube-system/coredns-668d6bf9bc-rm7mk" Sep 9 00:02:39.025487 systemd[1]: Created slice kubepods-besteffort-podabf21d55_a6e7_4fd1_ad4c_e82f7525f680.slice - libcontainer container kubepods-besteffort-podabf21d55_a6e7_4fd1_ad4c_e82f7525f680.slice. Sep 9 00:02:39.032181 systemd[1]: Created slice kubepods-besteffort-pod2b12fb94_9dbc_4031_90bc_b634460d49b8.slice - libcontainer container kubepods-besteffort-pod2b12fb94_9dbc_4031_90bc_b634460d49b8.slice. Sep 9 00:02:39.037592 systemd[1]: Created slice kubepods-besteffort-pod57b1360c_3eda_441d_822b_cfab485ba025.slice - libcontainer container kubepods-besteffort-pod57b1360c_3eda_441d_822b_cfab485ba025.slice. Sep 9 00:02:39.041443 systemd[1]: Created slice kubepods-burstable-pod6de14301_f214_422d_9e12_0b69107cbf97.slice - libcontainer container kubepods-burstable-pod6de14301_f214_422d_9e12_0b69107cbf97.slice. Sep 9 00:02:39.047310 systemd[1]: Created slice kubepods-burstable-pod8b3fc743_df1d_4d9a_822b_01f3200e3e51.slice - libcontainer container kubepods-burstable-pod8b3fc743_df1d_4d9a_822b_01f3200e3e51.slice. Sep 9 00:02:39.052113 systemd[1]: Created slice kubepods-besteffort-pod9d38a1be_6323_41e6_8564_b477a0eb94a8.slice - libcontainer container kubepods-besteffort-pod9d38a1be_6323_41e6_8564_b477a0eb94a8.slice. Sep 9 00:02:39.058089 systemd[1]: Created slice kubepods-besteffort-podfe12ff7b_73e6_42d0_a348_29ad8070fac9.slice - libcontainer container kubepods-besteffort-podfe12ff7b_73e6_42d0_a348_29ad8070fac9.slice. 
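Once the node reports ready, the kubelet verifies the pending pods' volumes and systemd creates one cgroup slice per pod. The unit names visible above are derived from the pod QoS class and UID, with the UID's dashes escaped to underscores. A small sketch of that mapping follows, using two UIDs from the entries above; it is a simplification, not the kubelet's container-manager code.

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming visible in the "Created slice" entries:
// kubepods-<qos>-pod<uid-with-dashes-escaped>.slice.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "abf21d55-a6e7-4fd1-ad4c-e82f7525f680"))
	// kubepods-besteffort-podabf21d55_a6e7_4fd1_ad4c_e82f7525f680.slice
	fmt.Println(sliceName("burstable", "8b3fc743-df1d-4d9a-822b-01f3200e3e51"))
	// kubepods-burstable-pod8b3fc743_df1d_4d9a_822b_01f3200e3e51.slice
}
```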
Sep 9 00:02:39.124257 kubelet[2630]: I0909 00:02:39.124222 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe12ff7b-73e6-42d0-a348-29ad8070fac9-whisker-backend-key-pair\") pod \"whisker-b854f49bb-nlfqw\" (UID: \"fe12ff7b-73e6-42d0-a348-29ad8070fac9\") " pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:39.124257 kubelet[2630]: I0909 00:02:39.124259 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwlcs\" (UniqueName: \"kubernetes.io/projected/fe12ff7b-73e6-42d0-a348-29ad8070fac9-kube-api-access-kwlcs\") pod \"whisker-b854f49bb-nlfqw\" (UID: \"fe12ff7b-73e6-42d0-a348-29ad8070fac9\") " pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:39.124413 kubelet[2630]: I0909 00:02:39.124280 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe12ff7b-73e6-42d0-a348-29ad8070fac9-whisker-ca-bundle\") pod \"whisker-b854f49bb-nlfqw\" (UID: \"fe12ff7b-73e6-42d0-a348-29ad8070fac9\") " pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:39.124413 kubelet[2630]: I0909 00:02:39.124310 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg7f2\" (UniqueName: \"kubernetes.io/projected/2b12fb94-9dbc-4031-90bc-b634460d49b8-kube-api-access-tg7f2\") pod \"calico-apiserver-79646b996b-6z92f\" (UID: \"2b12fb94-9dbc-4031-90bc-b634460d49b8\") " pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" Sep 9 00:02:39.124413 kubelet[2630]: I0909 00:02:39.124327 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86j8k\" (UniqueName: \"kubernetes.io/projected/6de14301-f214-422d-9e12-0b69107cbf97-kube-api-access-86j8k\") pod \"coredns-668d6bf9bc-4p4qb\" (UID: \"6de14301-f214-422d-9e12-0b69107cbf97\") " pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:39.124413 kubelet[2630]: I0909 00:02:39.124344 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9d38a1be-6323-41e6-8564-b477a0eb94a8-goldmane-key-pair\") pod \"goldmane-54d579b49d-ct9bv\" (UID: \"9d38a1be-6323-41e6-8564-b477a0eb94a8\") " pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:39.124413 kubelet[2630]: I0909 00:02:39.124372 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6de14301-f214-422d-9e12-0b69107cbf97-config-volume\") pod \"coredns-668d6bf9bc-4p4qb\" (UID: \"6de14301-f214-422d-9e12-0b69107cbf97\") " pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:39.124535 kubelet[2630]: I0909 00:02:39.124388 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57b1360c-3eda-441d-822b-cfab485ba025-tigera-ca-bundle\") pod \"calico-kube-controllers-68c5d8b85b-fq8gn\" (UID: \"57b1360c-3eda-441d-822b-cfab485ba025\") " pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" Sep 9 00:02:39.124535 kubelet[2630]: I0909 00:02:39.124409 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9d38a1be-6323-41e6-8564-b477a0eb94a8-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-ct9bv\" (UID: \"9d38a1be-6323-41e6-8564-b477a0eb94a8\") " pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:39.124535 kubelet[2630]: I0909 00:02:39.124427 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b12fb94-9dbc-4031-90bc-b634460d49b8-calico-apiserver-certs\") pod \"calico-apiserver-79646b996b-6z92f\" (UID: \"2b12fb94-9dbc-4031-90bc-b634460d49b8\") " pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" Sep 9 00:02:39.124535 kubelet[2630]: I0909 00:02:39.124443 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klmq2\" (UniqueName: \"kubernetes.io/projected/57b1360c-3eda-441d-822b-cfab485ba025-kube-api-access-klmq2\") pod \"calico-kube-controllers-68c5d8b85b-fq8gn\" (UID: \"57b1360c-3eda-441d-822b-cfab485ba025\") " pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" Sep 9 00:02:39.124535 kubelet[2630]: I0909 00:02:39.124459 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d38a1be-6323-41e6-8564-b477a0eb94a8-config\") pod \"goldmane-54d579b49d-ct9bv\" (UID: \"9d38a1be-6323-41e6-8564-b477a0eb94a8\") " pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:39.124652 kubelet[2630]: I0909 00:02:39.124473 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hdw2\" (UniqueName: \"kubernetes.io/projected/9d38a1be-6323-41e6-8564-b477a0eb94a8-kube-api-access-7hdw2\") pod \"goldmane-54d579b49d-ct9bv\" (UID: \"9d38a1be-6323-41e6-8564-b477a0eb94a8\") " pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:39.327746 containerd[1510]: time="2025-09-09T00:02:39.327593720Z" level=info msg="shim disconnected" id=d1ffd997003f14d1dd5f029e0af45583a31e486ab39f30afb750c72f5a29f234 namespace=k8s.io Sep 9 00:02:39.327746 containerd[1510]: time="2025-09-09T00:02:39.327661888Z" level=warning msg="cleaning up after shim disconnected" id=d1ffd997003f14d1dd5f029e0af45583a31e486ab39f30afb750c72f5a29f234 namespace=k8s.io Sep 9 00:02:39.327746 containerd[1510]: time="2025-09-09T00:02:39.327672698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:02:39.330265 containerd[1510]: time="2025-09-09T00:02:39.329912397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-cw46r,Uid:abf21d55-a6e7-4fd1-ad4c-e82f7525f680,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:02:39.335714 containerd[1510]: time="2025-09-09T00:02:39.335674405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-6z92f,Uid:2b12fb94-9dbc-4031-90bc-b634460d49b8,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:02:39.340818 containerd[1510]: time="2025-09-09T00:02:39.340601284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c5d8b85b-fq8gn,Uid:57b1360c-3eda-441d-822b-cfab485ba025,Namespace:calico-system,Attempt:0,}" Sep 9 00:02:39.345886 kubelet[2630]: E0909 00:02:39.345855 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:39.346571 containerd[1510]: time="2025-09-09T00:02:39.346518845Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:0,}" Sep 9 00:02:39.350278 kubelet[2630]: E0909 00:02:39.350195 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:39.351288 containerd[1510]: time="2025-09-09T00:02:39.350969940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rm7mk,Uid:8b3fc743-df1d-4d9a-822b-01f3200e3e51,Namespace:kube-system,Attempt:0,}" Sep 9 00:02:39.355382 containerd[1510]: time="2025-09-09T00:02:39.355330404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:0,}" Sep 9 00:02:39.361553 containerd[1510]: time="2025-09-09T00:02:39.361478458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b854f49bb-nlfqw,Uid:fe12ff7b-73e6-42d0-a348-29ad8070fac9,Namespace:calico-system,Attempt:0,}" Sep 9 00:02:39.595092 containerd[1510]: time="2025-09-09T00:02:39.594773927Z" level=error msg="Failed to destroy network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.599892 containerd[1510]: time="2025-09-09T00:02:39.599829197Z" level=error msg="encountered an error cleaning up failed sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.600023 containerd[1510]: time="2025-09-09T00:02:39.599934506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-cw46r,Uid:abf21d55-a6e7-4fd1-ad4c-e82f7525f680,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.603013 containerd[1510]: time="2025-09-09T00:02:39.602497252Z" level=error msg="Failed to destroy network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.604021 containerd[1510]: time="2025-09-09T00:02:39.603983995Z" level=error msg="Failed to destroy network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.604432 containerd[1510]: time="2025-09-09T00:02:39.604400708Z" level=error msg="encountered an error cleaning up failed sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.604488 containerd[1510]: time="2025-09-09T00:02:39.604472514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-6z92f,Uid:2b12fb94-9dbc-4031-90bc-b634460d49b8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.604701 containerd[1510]: time="2025-09-09T00:02:39.604677027Z" level=error msg="encountered an error cleaning up failed sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.605099 containerd[1510]: time="2025-09-09T00:02:39.605076398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.606050 containerd[1510]: time="2025-09-09T00:02:39.605990346Z" level=error msg="Failed to destroy network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.606389 containerd[1510]: time="2025-09-09T00:02:39.606344651Z" level=error msg="encountered an error cleaning up failed sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.606389 containerd[1510]: time="2025-09-09T00:02:39.606380829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b854f49bb-nlfqw,Uid:fe12ff7b-73e6-42d0-a348-29ad8070fac9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.608284 containerd[1510]: time="2025-09-09T00:02:39.608200229Z" level=error msg="Failed to destroy network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.609054 containerd[1510]: time="2025-09-09T00:02:39.609006754Z" level=error 
msg="encountered an error cleaning up failed sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.609240 kubelet[2630]: E0909 00:02:39.609176 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.609699 containerd[1510]: time="2025-09-09T00:02:39.609198604Z" level=error msg="Failed to destroy network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.609735 kubelet[2630]: E0909 00:02:39.609240 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" Sep 9 00:02:39.609735 kubelet[2630]: E0909 00:02:39.609264 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" Sep 9 00:02:39.609735 kubelet[2630]: E0909 00:02:39.609308 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79646b996b-6z92f_calico-apiserver(2b12fb94-9dbc-4031-90bc-b634460d49b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79646b996b-6z92f_calico-apiserver(2b12fb94-9dbc-4031-90bc-b634460d49b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" podUID="2b12fb94-9dbc-4031-90bc-b634460d49b8" Sep 9 00:02:39.609861 kubelet[2630]: E0909 00:02:39.609321 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.609861 kubelet[2630]: E0909 00:02:39.609624 2630 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:39.609861 kubelet[2630]: E0909 00:02:39.609639 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:39.609945 kubelet[2630]: E0909 00:02:39.609662 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-ct9bv_calico-system(9d38a1be-6323-41e6-8564-b477a0eb94a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-ct9bv_calico-system(9d38a1be-6323-41e6-8564-b477a0eb94a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-ct9bv" podUID="9d38a1be-6323-41e6-8564-b477a0eb94a8" Sep 9 00:02:39.609945 kubelet[2630]: E0909 00:02:39.609348 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.609945 kubelet[2630]: E0909 00:02:39.609686 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" Sep 9 00:02:39.610052 kubelet[2630]: E0909 00:02:39.609699 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" Sep 9 00:02:39.610052 kubelet[2630]: E0909 00:02:39.609719 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79646b996b-cw46r_calico-apiserver(abf21d55-a6e7-4fd1-ad4c-e82f7525f680)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-79646b996b-cw46r_calico-apiserver(abf21d55-a6e7-4fd1-ad4c-e82f7525f680)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" podUID="abf21d55-a6e7-4fd1-ad4c-e82f7525f680" Sep 9 00:02:39.610052 kubelet[2630]: E0909 00:02:39.609580 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.610166 kubelet[2630]: E0909 00:02:39.609743 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:39.610166 kubelet[2630]: E0909 00:02:39.609754 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:39.610166 kubelet[2630]: E0909 00:02:39.609785 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b854f49bb-nlfqw_calico-system(fe12ff7b-73e6-42d0-a348-29ad8070fac9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b854f49bb-nlfqw_calico-system(fe12ff7b-73e6-42d0-a348-29ad8070fac9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b854f49bb-nlfqw" podUID="fe12ff7b-73e6-42d0-a348-29ad8070fac9" Sep 9 00:02:39.610263 containerd[1510]: time="2025-09-09T00:02:39.610239631Z" level=error msg="Failed to destroy network for sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.610469 containerd[1510]: time="2025-09-09T00:02:39.610442372Z" level=error msg="encountered an error cleaning up failed sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 9 00:02:39.610504 containerd[1510]: time="2025-09-09T00:02:39.610486335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rm7mk,Uid:8b3fc743-df1d-4d9a-822b-01f3200e3e51,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.610585 containerd[1510]: time="2025-09-09T00:02:39.610561977Z" level=error msg="encountered an error cleaning up failed sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.610735 kubelet[2630]: E0909 00:02:39.610711 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.610784 kubelet[2630]: E0909 00:02:39.610742 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rm7mk" Sep 9 00:02:39.610784 kubelet[2630]: E0909 00:02:39.610758 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rm7mk" Sep 9 00:02:39.610833 kubelet[2630]: E0909 00:02:39.610784 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rm7mk_kube-system(8b3fc743-df1d-4d9a-822b-01f3200e3e51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rm7mk_kube-system(8b3fc743-df1d-4d9a-822b-01f3200e3e51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rm7mk" podUID="8b3fc743-df1d-4d9a-822b-01f3200e3e51" Sep 9 00:02:39.610884 containerd[1510]: time="2025-09-09T00:02:39.610604176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.610930 kubelet[2630]: E0909 00:02:39.610907 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.610971 kubelet[2630]: E0909 00:02:39.610933 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:39.610971 kubelet[2630]: E0909 00:02:39.610948 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:39.611047 kubelet[2630]: E0909 00:02:39.610969 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4p4qb_kube-system(6de14301-f214-422d-9e12-0b69107cbf97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4p4qb_kube-system(6de14301-f214-422d-9e12-0b69107cbf97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4p4qb" podUID="6de14301-f214-422d-9e12-0b69107cbf97" Sep 9 00:02:39.611576 containerd[1510]: time="2025-09-09T00:02:39.611549303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c5d8b85b-fq8gn,Uid:57b1360c-3eda-441d-822b-cfab485ba025,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.611718 kubelet[2630]: E0909 00:02:39.611673 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:39.611718 kubelet[2630]: E0909 00:02:39.611707 2630 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" Sep 9 00:02:39.611810 kubelet[2630]: E0909 00:02:39.611721 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" Sep 9 00:02:39.611810 kubelet[2630]: E0909 00:02:39.611749 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68c5d8b85b-fq8gn_calico-system(57b1360c-3eda-441d-822b-cfab485ba025)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68c5d8b85b-fq8gn_calico-system(57b1360c-3eda-441d-822b-cfab485ba025)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" podUID="57b1360c-3eda-441d-822b-cfab485ba025" Sep 9 00:02:39.623067 containerd[1510]: time="2025-09-09T00:02:39.622814593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:02:40.428519 systemd[1]: Created slice kubepods-besteffort-pod13bd77bc_168d_4e24_bcab_4df0554bc784.slice - libcontainer container kubepods-besteffort-pod13bd77bc_168d_4e24_bcab_4df0554bc784.slice. 
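Every sandbox failure in the block above bottoms out in the same condition: the Calico CNI plugin cannot read /var/lib/calico/nodename, a file calico/node writes once it is running with /var/lib/calico/ mounted (hence the PullImage of ghcr.io/flatcar/calico/node:v3.30.3 that follows). The sketch below shows that failure mode in isolation; the wording is taken from the log and the helper is illustrative, not Calico's implementation.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodename mimics the precondition behind the repeated CNI ADD failures:
// /var/lib/calico/nodename must exist before any pod network can be set up.
func nodename() (string, error) {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		// Until calico/node runs, this is the error every RunPodSandbox above reports.
		fmt.Println("CNI would fail:", err)
		return
	}
	fmt.Println("node:", name)
}
```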
Sep 9 00:02:40.431477 containerd[1510]: time="2025-09-09T00:02:40.431439704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbs8t,Uid:13bd77bc-168d-4e24-bcab-4df0554bc784,Namespace:calico-system,Attempt:0,}" Sep 9 00:02:40.493883 containerd[1510]: time="2025-09-09T00:02:40.493819057Z" level=error msg="Failed to destroy network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.494315 containerd[1510]: time="2025-09-09T00:02:40.494276907Z" level=error msg="encountered an error cleaning up failed sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.494370 containerd[1510]: time="2025-09-09T00:02:40.494339183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbs8t,Uid:13bd77bc-168d-4e24-bcab-4df0554bc784,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.494645 kubelet[2630]: E0909 00:02:40.494577 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.494645 kubelet[2630]: E0909 00:02:40.494656 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:40.494835 kubelet[2630]: E0909 00:02:40.494690 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:40.494835 kubelet[2630]: E0909 00:02:40.494740 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nbs8t_calico-system(13bd77bc-168d-4e24-bcab-4df0554bc784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nbs8t_calico-system(13bd77bc-168d-4e24-bcab-4df0554bc784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:40.496638 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f-shm.mount: Deactivated successfully. Sep 9 00:02:40.631234 kubelet[2630]: I0909 00:02:40.631161 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd" Sep 9 00:02:40.632051 kubelet[2630]: I0909 00:02:40.631997 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f" Sep 9 00:02:40.633648 kubelet[2630]: I0909 00:02:40.633629 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef" Sep 9 00:02:40.638598 kubelet[2630]: I0909 00:02:40.637090 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f" Sep 9 00:02:40.646643 containerd[1510]: time="2025-09-09T00:02:40.646589326Z" level=info msg="StopPodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\"" Sep 9 00:02:40.672067 containerd[1510]: time="2025-09-09T00:02:40.671440682Z" level=info msg="StopPodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\"" Sep 9 00:02:40.672067 containerd[1510]: time="2025-09-09T00:02:40.671475587Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\"" Sep 9 00:02:40.672067 containerd[1510]: time="2025-09-09T00:02:40.671649224Z" level=info msg="Ensure that sandbox 3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef in task-service has been cleanup successfully" Sep 9 00:02:40.672067 containerd[1510]: time="2025-09-09T00:02:40.671651198Z" level=info msg="Ensure that sandbox aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f in task-service has been cleanup successfully" Sep 9 00:02:40.672203 containerd[1510]: time="2025-09-09T00:02:40.672106343Z" level=info msg="TearDown network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" successfully" Sep 9 00:02:40.672203 containerd[1510]: time="2025-09-09T00:02:40.672130328Z" level=info msg="StopPodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" returns successfully" Sep 9 00:02:40.672392 kubelet[2630]: E0909 00:02:40.672353 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:40.672894 containerd[1510]: time="2025-09-09T00:02:40.672846784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rm7mk,Uid:8b3fc743-df1d-4d9a-822b-01f3200e3e51,Namespace:kube-system,Attempt:1,}" Sep 9 00:02:40.673486 containerd[1510]: time="2025-09-09T00:02:40.673272093Z" level=info msg="Ensure that sandbox 47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd in task-service has been cleanup successfully" Sep 9 00:02:40.673486 containerd[1510]: time="2025-09-09T00:02:40.673390206Z" level=info msg="TearDown network for sandbox 
\"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" successfully" Sep 9 00:02:40.673486 containerd[1510]: time="2025-09-09T00:02:40.673409312Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" returns successfully" Sep 9 00:02:40.673661 containerd[1510]: time="2025-09-09T00:02:40.673643300Z" level=info msg="TearDown network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" successfully" Sep 9 00:02:40.673701 kubelet[2630]: E0909 00:02:40.673693 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:40.673807 containerd[1510]: time="2025-09-09T00:02:40.673763888Z" level=info msg="StopPodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" returns successfully" Sep 9 00:02:40.674959 containerd[1510]: time="2025-09-09T00:02:40.674544183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b854f49bb-nlfqw,Uid:fe12ff7b-73e6-42d0-a348-29ad8070fac9,Namespace:calico-system,Attempt:1,}" Sep 9 00:02:40.674959 containerd[1510]: time="2025-09-09T00:02:40.674807939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:1,}" Sep 9 00:02:40.675089 kubelet[2630]: I0909 00:02:40.675067 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3" Sep 9 00:02:40.675531 containerd[1510]: time="2025-09-09T00:02:40.675501792Z" level=info msg="StopPodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\"" Sep 9 00:02:40.677596 systemd[1]: run-netns-cni\x2d72972dd3\x2dfd17\x2d02d6\x2d94a9\x2dfa68d6f09252.mount: Deactivated successfully. Sep 9 00:02:40.677712 systemd[1]: run-netns-cni\x2d62ba999b\x2dd546\x2dc7c9\x2daee0\x2da87a1d6e918c.mount: Deactivated successfully. 
Sep 9 00:02:40.680254 kubelet[2630]: I0909 00:02:40.680158 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb" Sep 9 00:02:40.682191 containerd[1510]: time="2025-09-09T00:02:40.681102948Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\"" Sep 9 00:02:40.682191 containerd[1510]: time="2025-09-09T00:02:40.681335304Z" level=info msg="Ensure that sandbox ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb in task-service has been cleanup successfully" Sep 9 00:02:40.682191 containerd[1510]: time="2025-09-09T00:02:40.682060266Z" level=info msg="TearDown network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" successfully" Sep 9 00:02:40.682191 containerd[1510]: time="2025-09-09T00:02:40.682074713Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" returns successfully" Sep 9 00:02:40.692015 kubelet[2630]: I0909 00:02:40.691960 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3" Sep 9 00:02:40.697417 kubelet[2630]: I0909 00:02:40.697344 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66" Sep 9 00:02:40.709730 containerd[1510]: time="2025-09-09T00:02:40.682557431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:1,}" Sep 9 00:02:40.709955 containerd[1510]: time="2025-09-09T00:02:40.690175094Z" level=info msg="StopPodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\"" Sep 9 00:02:40.710100 containerd[1510]: time="2025-09-09T00:02:40.696285857Z" level=info msg="StopPodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\"" Sep 9 00:02:40.710164 containerd[1510]: time="2025-09-09T00:02:40.710142233Z" level=info msg="Ensure that sandbox d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f in task-service has been cleanup successfully" Sep 9 00:02:40.710314 containerd[1510]: time="2025-09-09T00:02:40.710274271Z" level=info msg="Ensure that sandbox 0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3 in task-service has been cleanup successfully" Sep 9 00:02:40.710366 containerd[1510]: time="2025-09-09T00:02:40.710351035Z" level=info msg="TearDown network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" successfully" Sep 9 00:02:40.710366 containerd[1510]: time="2025-09-09T00:02:40.710362517Z" level=info msg="StopPodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" returns successfully" Sep 9 00:02:40.710940 containerd[1510]: time="2025-09-09T00:02:40.697772229Z" level=info msg="StopPodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\"" Sep 9 00:02:40.710940 containerd[1510]: time="2025-09-09T00:02:40.710480880Z" level=info msg="TearDown network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" successfully" Sep 9 00:02:40.710940 containerd[1510]: time="2025-09-09T00:02:40.710496840Z" level=info msg="StopPodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" returns successfully" Sep 9 00:02:40.710940 
containerd[1510]: time="2025-09-09T00:02:40.710577191Z" level=info msg="Ensure that sandbox 2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66 in task-service has been cleanup successfully" Sep 9 00:02:40.710940 containerd[1510]: time="2025-09-09T00:02:40.710732191Z" level=info msg="TearDown network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" successfully" Sep 9 00:02:40.710940 containerd[1510]: time="2025-09-09T00:02:40.710742431Z" level=info msg="StopPodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" returns successfully" Sep 9 00:02:40.710940 containerd[1510]: time="2025-09-09T00:02:40.710930685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c5d8b85b-fq8gn,Uid:57b1360c-3eda-441d-822b-cfab485ba025,Namespace:calico-system,Attempt:1,}" Sep 9 00:02:40.711214 containerd[1510]: time="2025-09-09T00:02:40.711175084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbs8t,Uid:13bd77bc-168d-4e24-bcab-4df0554bc784,Namespace:calico-system,Attempt:1,}" Sep 9 00:02:40.711364 containerd[1510]: time="2025-09-09T00:02:40.711342378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-cw46r,Uid:abf21d55-a6e7-4fd1-ad4c-e82f7525f680,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:02:40.728491 containerd[1510]: time="2025-09-09T00:02:40.728460574Z" level=info msg="Ensure that sandbox 63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3 in task-service has been cleanup successfully" Sep 9 00:02:40.728660 containerd[1510]: time="2025-09-09T00:02:40.728641905Z" level=info msg="TearDown network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" successfully" Sep 9 00:02:40.728692 containerd[1510]: time="2025-09-09T00:02:40.728660239Z" level=info msg="StopPodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" returns successfully" Sep 9 00:02:40.729362 containerd[1510]: time="2025-09-09T00:02:40.729341539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-6z92f,Uid:2b12fb94-9dbc-4031-90bc-b634460d49b8,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:02:40.825694 containerd[1510]: time="2025-09-09T00:02:40.825640267Z" level=error msg="Failed to destroy network for sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.827337 containerd[1510]: time="2025-09-09T00:02:40.827312699Z" level=error msg="encountered an error cleaning up failed sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.828937 containerd[1510]: time="2025-09-09T00:02:40.828912556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b854f49bb-nlfqw,Uid:fe12ff7b-73e6-42d0-a348-29ad8070fac9,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.830884 kubelet[2630]: E0909 00:02:40.830622 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.830884 kubelet[2630]: E0909 00:02:40.830828 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:40.830884 kubelet[2630]: E0909 00:02:40.830853 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:40.831472 kubelet[2630]: E0909 00:02:40.831025 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b854f49bb-nlfqw_calico-system(fe12ff7b-73e6-42d0-a348-29ad8070fac9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b854f49bb-nlfqw_calico-system(fe12ff7b-73e6-42d0-a348-29ad8070fac9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b854f49bb-nlfqw" podUID="fe12ff7b-73e6-42d0-a348-29ad8070fac9" Sep 9 00:02:40.838584 containerd[1510]: time="2025-09-09T00:02:40.838409891Z" level=error msg="Failed to destroy network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.838976 containerd[1510]: time="2025-09-09T00:02:40.838930530Z" level=error msg="Failed to destroy network for sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.839596 containerd[1510]: time="2025-09-09T00:02:40.839567557Z" level=error msg="encountered an error cleaning up failed sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.841726 containerd[1510]: 
time="2025-09-09T00:02:40.841000199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.842515 containerd[1510]: time="2025-09-09T00:02:40.841938161Z" level=error msg="encountered an error cleaning up failed sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.843238 kubelet[2630]: E0909 00:02:40.843201 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.843346 kubelet[2630]: E0909 00:02:40.843267 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:40.843346 kubelet[2630]: E0909 00:02:40.843293 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:40.843487 kubelet[2630]: E0909 00:02:40.843335 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4p4qb_kube-system(6de14301-f214-422d-9e12-0b69107cbf97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4p4qb_kube-system(6de14301-f214-422d-9e12-0b69107cbf97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4p4qb" podUID="6de14301-f214-422d-9e12-0b69107cbf97" Sep 9 00:02:40.843702 containerd[1510]: time="2025-09-09T00:02:40.843202126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rm7mk,Uid:8b3fc743-df1d-4d9a-822b-01f3200e3e51,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.844245 kubelet[2630]: E0909 00:02:40.844218 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.844340 kubelet[2630]: E0909 00:02:40.844250 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rm7mk" Sep 9 00:02:40.844340 kubelet[2630]: E0909 00:02:40.844266 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rm7mk" Sep 9 00:02:40.844340 kubelet[2630]: E0909 00:02:40.844290 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rm7mk_kube-system(8b3fc743-df1d-4d9a-822b-01f3200e3e51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rm7mk_kube-system(8b3fc743-df1d-4d9a-822b-01f3200e3e51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rm7mk" podUID="8b3fc743-df1d-4d9a-822b-01f3200e3e51" Sep 9 00:02:40.910155 containerd[1510]: time="2025-09-09T00:02:40.910013747Z" level=error msg="Failed to destroy network for sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.910931 containerd[1510]: time="2025-09-09T00:02:40.910783022Z" level=error msg="encountered an error cleaning up failed sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.911168 containerd[1510]: time="2025-09-09T00:02:40.911106480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-6z92f,Uid:2b12fb94-9dbc-4031-90bc-b634460d49b8,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.911456 kubelet[2630]: E0909 00:02:40.911409 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.911573 kubelet[2630]: E0909 00:02:40.911485 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" Sep 9 00:02:40.911573 kubelet[2630]: E0909 00:02:40.911518 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" Sep 9 00:02:40.911656 kubelet[2630]: E0909 00:02:40.911566 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79646b996b-6z92f_calico-apiserver(2b12fb94-9dbc-4031-90bc-b634460d49b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79646b996b-6z92f_calico-apiserver(2b12fb94-9dbc-4031-90bc-b634460d49b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" podUID="2b12fb94-9dbc-4031-90bc-b634460d49b8" Sep 9 00:02:40.921735 containerd[1510]: time="2025-09-09T00:02:40.921006012Z" level=error msg="Failed to destroy network for sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.921735 containerd[1510]: time="2025-09-09T00:02:40.921534595Z" level=error msg="encountered an error cleaning up failed sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.921735 containerd[1510]: time="2025-09-09T00:02:40.921607933Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68c5d8b85b-fq8gn,Uid:57b1360c-3eda-441d-822b-cfab485ba025,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.922148 kubelet[2630]: E0909 00:02:40.921884 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.922148 kubelet[2630]: E0909 00:02:40.921943 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" Sep 9 00:02:40.922148 kubelet[2630]: E0909 00:02:40.921964 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" Sep 9 00:02:40.923536 kubelet[2630]: E0909 00:02:40.922013 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68c5d8b85b-fq8gn_calico-system(57b1360c-3eda-441d-822b-cfab485ba025)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68c5d8b85b-fq8gn_calico-system(57b1360c-3eda-441d-822b-cfab485ba025)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" podUID="57b1360c-3eda-441d-822b-cfab485ba025" Sep 9 00:02:40.924629 containerd[1510]: time="2025-09-09T00:02:40.924565641Z" level=error msg="Failed to destroy network for sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.925780 containerd[1510]: time="2025-09-09T00:02:40.925756097Z" level=error msg="encountered an error cleaning up failed sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 9 00:02:40.925916 containerd[1510]: time="2025-09-09T00:02:40.925873087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbs8t,Uid:13bd77bc-168d-4e24-bcab-4df0554bc784,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.926147 kubelet[2630]: E0909 00:02:40.926056 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.926147 kubelet[2630]: E0909 00:02:40.926091 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:40.926147 kubelet[2630]: E0909 00:02:40.926109 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:40.926775 kubelet[2630]: E0909 00:02:40.926145 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nbs8t_calico-system(13bd77bc-168d-4e24-bcab-4df0554bc784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nbs8t_calico-system(13bd77bc-168d-4e24-bcab-4df0554bc784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:40.928295 containerd[1510]: time="2025-09-09T00:02:40.928250114Z" level=error msg="Failed to destroy network for sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.928696 containerd[1510]: time="2025-09-09T00:02:40.928665394Z" level=error msg="encountered an error cleaning up failed sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.928757 containerd[1510]: time="2025-09-09T00:02:40.928734835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-cw46r,Uid:abf21d55-a6e7-4fd1-ad4c-e82f7525f680,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.929170 kubelet[2630]: E0909 00:02:40.928935 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.929170 kubelet[2630]: E0909 00:02:40.928985 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" Sep 9 00:02:40.929170 kubelet[2630]: E0909 00:02:40.929004 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" Sep 9 00:02:40.929283 kubelet[2630]: E0909 00:02:40.929064 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79646b996b-cw46r_calico-apiserver(abf21d55-a6e7-4fd1-ad4c-e82f7525f680)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79646b996b-cw46r_calico-apiserver(abf21d55-a6e7-4fd1-ad4c-e82f7525f680)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" podUID="abf21d55-a6e7-4fd1-ad4c-e82f7525f680" Sep 9 00:02:40.931680 containerd[1510]: time="2025-09-09T00:02:40.931515309Z" level=error msg="Failed to destroy network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.931969 containerd[1510]: time="2025-09-09T00:02:40.931934146Z" level=error msg="encountered an error cleaning up failed sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.932107 containerd[1510]: time="2025-09-09T00:02:40.931991935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.932231 kubelet[2630]: E0909 00:02:40.932201 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:40.932289 kubelet[2630]: E0909 00:02:40.932239 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:40.932289 kubelet[2630]: E0909 00:02:40.932258 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:40.932369 kubelet[2630]: E0909 00:02:40.932290 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-ct9bv_calico-system(9d38a1be-6323-41e6-8564-b477a0eb94a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-ct9bv_calico-system(9d38a1be-6323-41e6-8564-b477a0eb94a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-ct9bv" podUID="9d38a1be-6323-41e6-8564-b477a0eb94a8" Sep 9 00:02:40.977979 systemd[1]: run-netns-cni\x2d6cfc39d1\x2d8707\x2de867\x2d4b57\x2d5030d267b1f5.mount: Deactivated successfully. Sep 9 00:02:40.978155 systemd[1]: run-netns-cni\x2daa1d6cc2\x2dfdc0\x2d3494\x2dff73\x2d2ba5ad32d558.mount: Deactivated successfully. Sep 9 00:02:40.978247 systemd[1]: run-netns-cni\x2da9dec991\x2d19a0\x2d5b04\x2d4cd3\x2d918ab0b83fe8.mount: Deactivated successfully. Sep 9 00:02:40.978345 systemd[1]: run-netns-cni\x2d4089b2c3\x2d7942\x2dcacf\x2dac17\x2da02521e26700.mount: Deactivated successfully. 
Sep 9 00:02:40.978455 systemd[1]: run-netns-cni\x2d11b465e8\x2d4fe0\x2dc9d0\x2d05f3\x2da09482d7d373.mount: Deactivated successfully. Sep 9 00:02:40.978548 systemd[1]: run-netns-cni\x2d94469074\x2da737\x2d8ce6\x2d5577\x2dd8fceb7c3415.mount: Deactivated successfully. Sep 9 00:02:41.700270 kubelet[2630]: I0909 00:02:41.700237 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41" Sep 9 00:02:41.701001 containerd[1510]: time="2025-09-09T00:02:41.700967095Z" level=info msg="StopPodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\"" Sep 9 00:02:41.701331 containerd[1510]: time="2025-09-09T00:02:41.701249967Z" level=info msg="Ensure that sandbox 2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41 in task-service has been cleanup successfully" Sep 9 00:02:41.701511 containerd[1510]: time="2025-09-09T00:02:41.701478216Z" level=info msg="TearDown network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" successfully" Sep 9 00:02:41.701511 containerd[1510]: time="2025-09-09T00:02:41.701498253Z" level=info msg="StopPodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" returns successfully" Sep 9 00:02:41.701887 containerd[1510]: time="2025-09-09T00:02:41.701860774Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\"" Sep 9 00:02:41.701998 containerd[1510]: time="2025-09-09T00:02:41.701966453Z" level=info msg="TearDown network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" successfully" Sep 9 00:02:41.702069 containerd[1510]: time="2025-09-09T00:02:41.701998292Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" returns successfully" Sep 9 00:02:41.702857 kubelet[2630]: I0909 00:02:41.702514 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c" Sep 9 00:02:41.702911 containerd[1510]: time="2025-09-09T00:02:41.702869670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:2,}" Sep 9 00:02:41.704593 containerd[1510]: time="2025-09-09T00:02:41.704212633Z" level=info msg="StopPodSandbox for \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\"" Sep 9 00:02:41.704593 containerd[1510]: time="2025-09-09T00:02:41.704393834Z" level=info msg="Ensure that sandbox f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c in task-service has been cleanup successfully" Sep 9 00:02:41.705976 kubelet[2630]: I0909 00:02:41.705956 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a" Sep 9 00:02:41.706850 systemd[1]: run-netns-cni\x2de4b442da\x2de172\x2d942b\x2d2cc1\x2dabbbcf1b5ba4.mount: Deactivated successfully. Sep 9 00:02:41.706996 systemd[1]: run-netns-cni\x2d39bf665e\x2d5ee8\x2d4047\x2d27e3\x2dd1eaecdd61e9.mount: Deactivated successfully. 
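[editor's note] Every failure above carries the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file that calico/node writes to a host path once it is up. The sketch below is illustrative only (it is not Calico's source), showing the kind of check that produces exactly this error string under the assumption that the plugin simply reads the nodename file and wraps the stat error with the "check that the calico/node container is running" hint.

```go
// Minimal sketch (not Calico's actual code) of the check behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // written by calico/node once it has started

// readNodename mirrors the failure mode in the log: until calico/node has
// written the file, every CNI ADD/DEL fails with the same hint.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```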
Sep 9 00:02:41.707190 containerd[1510]: time="2025-09-09T00:02:41.706967770Z" level=info msg="StopPodSandbox for \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\"" Sep 9 00:02:41.707190 containerd[1510]: time="2025-09-09T00:02:41.707175920Z" level=info msg="Ensure that sandbox a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a in task-service has been cleanup successfully" Sep 9 00:02:41.708323 kubelet[2630]: I0909 00:02:41.708304 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d" Sep 9 00:02:41.709316 containerd[1510]: time="2025-09-09T00:02:41.708736011Z" level=info msg="StopPodSandbox for \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\"" Sep 9 00:02:41.709316 containerd[1510]: time="2025-09-09T00:02:41.708917843Z" level=info msg="Ensure that sandbox cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d in task-service has been cleanup successfully" Sep 9 00:02:41.710352 kubelet[2630]: I0909 00:02:41.710334 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164" Sep 9 00:02:41.712597 containerd[1510]: time="2025-09-09T00:02:41.712090524Z" level=info msg="StopPodSandbox for \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\"" Sep 9 00:02:41.712597 containerd[1510]: time="2025-09-09T00:02:41.712332528Z" level=info msg="Ensure that sandbox b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164 in task-service has been cleanup successfully" Sep 9 00:02:41.713249 containerd[1510]: time="2025-09-09T00:02:41.713196993Z" level=info msg="TearDown network for sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\" successfully" Sep 9 00:02:41.713249 containerd[1510]: time="2025-09-09T00:02:41.713222250Z" level=info msg="StopPodSandbox for \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\" returns successfully" Sep 9 00:02:41.713467 kubelet[2630]: I0909 00:02:41.713438 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1" Sep 9 00:02:41.713716 containerd[1510]: time="2025-09-09T00:02:41.713682145Z" level=info msg="TearDown network for sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\" successfully" Sep 9 00:02:41.713921 containerd[1510]: time="2025-09-09T00:02:41.713770741Z" level=info msg="StopPodSandbox for \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\" returns successfully" Sep 9 00:02:41.714904 containerd[1510]: time="2025-09-09T00:02:41.714015211Z" level=info msg="TearDown network for sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\" successfully" Sep 9 00:02:41.714904 containerd[1510]: time="2025-09-09T00:02:41.714059674Z" level=info msg="StopPodSandbox for \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\" returns successfully" Sep 9 00:02:41.714904 containerd[1510]: time="2025-09-09T00:02:41.714238209Z" level=info msg="StopPodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\"" Sep 9 00:02:41.714904 containerd[1510]: time="2025-09-09T00:02:41.714336915Z" level=info msg="TearDown network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" successfully" Sep 9 00:02:41.714904 containerd[1510]: 
time="2025-09-09T00:02:41.714354097Z" level=info msg="StopPodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" returns successfully" Sep 9 00:02:41.714904 containerd[1510]: time="2025-09-09T00:02:41.714403811Z" level=info msg="StopPodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\"" Sep 9 00:02:41.714904 containerd[1510]: time="2025-09-09T00:02:41.714499099Z" level=info msg="TearDown network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" successfully" Sep 9 00:02:41.714904 containerd[1510]: time="2025-09-09T00:02:41.714512294Z" level=info msg="StopPodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" returns successfully" Sep 9 00:02:41.714904 containerd[1510]: time="2025-09-09T00:02:41.714559452Z" level=info msg="StopPodSandbox for \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\"" Sep 9 00:02:41.714904 containerd[1510]: time="2025-09-09T00:02:41.714759559Z" level=info msg="Ensure that sandbox 74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1 in task-service has been cleanup successfully" Sep 9 00:02:41.717161 containerd[1510]: time="2025-09-09T00:02:41.716128821Z" level=info msg="TearDown network for sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\" successfully" Sep 9 00:02:41.717161 containerd[1510]: time="2025-09-09T00:02:41.716195667Z" level=info msg="StopPodSandbox for \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\" returns successfully" Sep 9 00:02:41.717161 containerd[1510]: time="2025-09-09T00:02:41.716290566Z" level=info msg="TearDown network for sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\" successfully" Sep 9 00:02:41.717161 containerd[1510]: time="2025-09-09T00:02:41.716304993Z" level=info msg="StopPodSandbox for \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\" returns successfully" Sep 9 00:02:41.717161 containerd[1510]: time="2025-09-09T00:02:41.716359866Z" level=info msg="StopPodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\"" Sep 9 00:02:41.717161 containerd[1510]: time="2025-09-09T00:02:41.716469411Z" level=info msg="TearDown network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" successfully" Sep 9 00:02:41.717161 containerd[1510]: time="2025-09-09T00:02:41.716481844Z" level=info msg="StopPodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" returns successfully" Sep 9 00:02:41.717092 systemd[1]: run-netns-cni\x2dc706a61a\x2d11fe\x2d80a3\x2db2f3\x2d19eccfbe43a9.mount: Deactivated successfully. Sep 9 00:02:41.717226 systemd[1]: run-netns-cni\x2dd98c3679\x2d0199\x2d28e6\x2d148a\x2d539547742a3e.mount: Deactivated successfully. 
Sep 9 00:02:41.717524 containerd[1510]: time="2025-09-09T00:02:41.717446076Z" level=info msg="StopPodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\"" Sep 9 00:02:41.717556 containerd[1510]: time="2025-09-09T00:02:41.717530164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b854f49bb-nlfqw,Uid:fe12ff7b-73e6-42d0-a348-29ad8070fac9,Namespace:calico-system,Attempt:2,}" Sep 9 00:02:41.717638 containerd[1510]: time="2025-09-09T00:02:41.717570881Z" level=info msg="TearDown network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" successfully" Sep 9 00:02:41.717638 containerd[1510]: time="2025-09-09T00:02:41.717614072Z" level=info msg="StopPodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" returns successfully" Sep 9 00:02:41.717712 containerd[1510]: time="2025-09-09T00:02:41.717673834Z" level=info msg="StopPodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\"" Sep 9 00:02:41.717820 containerd[1510]: time="2025-09-09T00:02:41.717797857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-cw46r,Uid:abf21d55-a6e7-4fd1-ad4c-e82f7525f680,Namespace:calico-apiserver,Attempt:2,}" Sep 9 00:02:41.717858 containerd[1510]: time="2025-09-09T00:02:41.717842561Z" level=info msg="TearDown network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" successfully" Sep 9 00:02:41.717889 containerd[1510]: time="2025-09-09T00:02:41.717859282Z" level=info msg="StopPodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" returns successfully" Sep 9 00:02:41.718020 containerd[1510]: time="2025-09-09T00:02:41.717977044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c5d8b85b-fq8gn,Uid:57b1360c-3eda-441d-822b-cfab485ba025,Namespace:calico-system,Attempt:2,}" Sep 9 00:02:41.719466 kubelet[2630]: E0909 00:02:41.719436 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:41.720179 containerd[1510]: time="2025-09-09T00:02:41.720148484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbs8t,Uid:13bd77bc-168d-4e24-bcab-4df0554bc784,Namespace:calico-system,Attempt:2,}" Sep 9 00:02:41.720459 containerd[1510]: time="2025-09-09T00:02:41.720414994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rm7mk,Uid:8b3fc743-df1d-4d9a-822b-01f3200e3e51,Namespace:kube-system,Attempt:2,}" Sep 9 00:02:41.720840 kubelet[2630]: I0909 00:02:41.720805 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6" Sep 9 00:02:41.721774 containerd[1510]: time="2025-09-09T00:02:41.721747869Z" level=info msg="StopPodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\"" Sep 9 00:02:41.721984 containerd[1510]: time="2025-09-09T00:02:41.721965758Z" level=info msg="Ensure that sandbox b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6 in task-service has been cleanup successfully" Sep 9 00:02:41.723410 containerd[1510]: time="2025-09-09T00:02:41.723128593Z" level=info msg="TearDown network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" successfully" Sep 9 00:02:41.723410 containerd[1510]: time="2025-09-09T00:02:41.723269016Z" 
level=info msg="StopPodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" returns successfully" Sep 9 00:02:41.724801 containerd[1510]: time="2025-09-09T00:02:41.724608674Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\"" Sep 9 00:02:41.724874 containerd[1510]: time="2025-09-09T00:02:41.724845879Z" level=info msg="TearDown network for sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" successfully" Sep 9 00:02:41.724874 containerd[1510]: time="2025-09-09T00:02:41.724862670Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" returns successfully" Sep 9 00:02:41.725185 kubelet[2630]: E0909 00:02:41.725159 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:41.726851 containerd[1510]: time="2025-09-09T00:02:41.725617990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:2,}" Sep 9 00:02:41.726768 systemd[1]: run-netns-cni\x2d70313e2e\x2d0397\x2d8572\x2d5179\x2d20d73c23451e.mount: Deactivated successfully. Sep 9 00:02:41.726864 systemd[1]: run-netns-cni\x2d1cb36877\x2d6554\x2d7766\x2d144e\x2dbbdd4e567ed6.mount: Deactivated successfully. Sep 9 00:02:41.731291 kubelet[2630]: I0909 00:02:41.731260 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473" Sep 9 00:02:41.731957 systemd[1]: run-netns-cni\x2d803f962b\x2d2d69\x2d2126\x2d0218\x2d1e691824f734.mount: Deactivated successfully. 
Sep 9 00:02:41.733042 containerd[1510]: time="2025-09-09T00:02:41.732302418Z" level=info msg="StopPodSandbox for \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\"" Sep 9 00:02:41.733042 containerd[1510]: time="2025-09-09T00:02:41.732570492Z" level=info msg="Ensure that sandbox e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473 in task-service has been cleanup successfully" Sep 9 00:02:41.733161 containerd[1510]: time="2025-09-09T00:02:41.733141485Z" level=info msg="TearDown network for sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\" successfully" Sep 9 00:02:41.733189 containerd[1510]: time="2025-09-09T00:02:41.733162514Z" level=info msg="StopPodSandbox for \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\" returns successfully" Sep 9 00:02:41.733956 containerd[1510]: time="2025-09-09T00:02:41.733919556Z" level=info msg="StopPodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\"" Sep 9 00:02:41.737056 containerd[1510]: time="2025-09-09T00:02:41.734241451Z" level=info msg="TearDown network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" successfully" Sep 9 00:02:41.737056 containerd[1510]: time="2025-09-09T00:02:41.734262401Z" level=info msg="StopPodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" returns successfully" Sep 9 00:02:41.737056 containerd[1510]: time="2025-09-09T00:02:41.734995298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-6z92f,Uid:2b12fb94-9dbc-4031-90bc-b634460d49b8,Namespace:calico-apiserver,Attempt:2,}" Sep 9 00:02:41.785601 containerd[1510]: time="2025-09-09T00:02:41.785503354Z" level=error msg="Failed to destroy network for sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.785959 containerd[1510]: time="2025-09-09T00:02:41.785917912Z" level=error msg="encountered an error cleaning up failed sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.786005 containerd[1510]: time="2025-09-09T00:02:41.785978937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.786250 kubelet[2630]: E0909 00:02:41.786211 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.786329 kubelet[2630]: E0909 00:02:41.786276 2630 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:41.786329 kubelet[2630]: E0909 00:02:41.786299 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:41.786406 kubelet[2630]: E0909 00:02:41.786342 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-ct9bv_calico-system(9d38a1be-6323-41e6-8564-b477a0eb94a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-ct9bv_calico-system(9d38a1be-6323-41e6-8564-b477a0eb94a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-ct9bv" podUID="9d38a1be-6323-41e6-8564-b477a0eb94a8" Sep 9 00:02:41.924059 containerd[1510]: time="2025-09-09T00:02:41.923222222Z" level=error msg="Failed to destroy network for sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.928103 containerd[1510]: time="2025-09-09T00:02:41.924715197Z" level=error msg="encountered an error cleaning up failed sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.928498 containerd[1510]: time="2025-09-09T00:02:41.928317796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.928688 kubelet[2630]: E0909 00:02:41.928640 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 
00:02:41.928817 kubelet[2630]: E0909 00:02:41.928713 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:41.928817 kubelet[2630]: E0909 00:02:41.928736 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:41.928817 kubelet[2630]: E0909 00:02:41.928777 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4p4qb_kube-system(6de14301-f214-422d-9e12-0b69107cbf97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4p4qb_kube-system(6de14301-f214-422d-9e12-0b69107cbf97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4p4qb" podUID="6de14301-f214-422d-9e12-0b69107cbf97" Sep 9 00:02:41.934953 containerd[1510]: time="2025-09-09T00:02:41.934900975Z" level=error msg="Failed to destroy network for sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.935612 containerd[1510]: time="2025-09-09T00:02:41.935585691Z" level=error msg="encountered an error cleaning up failed sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.935748 containerd[1510]: time="2025-09-09T00:02:41.935725804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b854f49bb-nlfqw,Uid:fe12ff7b-73e6-42d0-a348-29ad8070fac9,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.937251 kubelet[2630]: E0909 00:02:41.936053 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 9 00:02:41.937251 kubelet[2630]: E0909 00:02:41.936136 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:41.937251 kubelet[2630]: E0909 00:02:41.936163 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:41.937416 kubelet[2630]: E0909 00:02:41.936221 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b854f49bb-nlfqw_calico-system(fe12ff7b-73e6-42d0-a348-29ad8070fac9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b854f49bb-nlfqw_calico-system(fe12ff7b-73e6-42d0-a348-29ad8070fac9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b854f49bb-nlfqw" podUID="fe12ff7b-73e6-42d0-a348-29ad8070fac9" Sep 9 00:02:41.944055 containerd[1510]: time="2025-09-09T00:02:41.943976576Z" level=error msg="Failed to destroy network for sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.944827 containerd[1510]: time="2025-09-09T00:02:41.944791136Z" level=error msg="encountered an error cleaning up failed sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.944939 containerd[1510]: time="2025-09-09T00:02:41.944901644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c5d8b85b-fq8gn,Uid:57b1360c-3eda-441d-822b-cfab485ba025,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.945663 kubelet[2630]: E0909 00:02:41.945258 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.945663 kubelet[2630]: E0909 00:02:41.945339 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" Sep 9 00:02:41.945663 kubelet[2630]: E0909 00:02:41.945366 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" Sep 9 00:02:41.945798 kubelet[2630]: E0909 00:02:41.945427 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68c5d8b85b-fq8gn_calico-system(57b1360c-3eda-441d-822b-cfab485ba025)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68c5d8b85b-fq8gn_calico-system(57b1360c-3eda-441d-822b-cfab485ba025)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" podUID="57b1360c-3eda-441d-822b-cfab485ba025" Sep 9 00:02:41.953317 containerd[1510]: time="2025-09-09T00:02:41.953166642Z" level=error msg="Failed to destroy network for sandbox \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.954002 containerd[1510]: time="2025-09-09T00:02:41.953979790Z" level=error msg="encountered an error cleaning up failed sandbox \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.954175 containerd[1510]: time="2025-09-09T00:02:41.954141224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-cw46r,Uid:abf21d55-a6e7-4fd1-ad4c-e82f7525f680,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.954743 kubelet[2630]: E0909 00:02:41.954704 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.954874 kubelet[2630]: E0909 00:02:41.954857 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" Sep 9 00:02:41.954998 kubelet[2630]: E0909 00:02:41.954978 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" Sep 9 00:02:41.955157 kubelet[2630]: E0909 00:02:41.955112 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79646b996b-cw46r_calico-apiserver(abf21d55-a6e7-4fd1-ad4c-e82f7525f680)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79646b996b-cw46r_calico-apiserver(abf21d55-a6e7-4fd1-ad4c-e82f7525f680)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" podUID="abf21d55-a6e7-4fd1-ad4c-e82f7525f680" Sep 9 00:02:41.966710 containerd[1510]: time="2025-09-09T00:02:41.966650355Z" level=error msg="Failed to destroy network for sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.967569 containerd[1510]: time="2025-09-09T00:02:41.967505903Z" level=error msg="encountered an error cleaning up failed sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.967736 containerd[1510]: time="2025-09-09T00:02:41.967584831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbs8t,Uid:13bd77bc-168d-4e24-bcab-4df0554bc784,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.967915 kubelet[2630]: E0909 00:02:41.967868 2630 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.968016 kubelet[2630]: E0909 00:02:41.967935 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:41.968016 kubelet[2630]: E0909 00:02:41.967959 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:41.968145 kubelet[2630]: E0909 00:02:41.968010 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nbs8t_calico-system(13bd77bc-168d-4e24-bcab-4df0554bc784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nbs8t_calico-system(13bd77bc-168d-4e24-bcab-4df0554bc784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:41.968310 containerd[1510]: time="2025-09-09T00:02:41.968259509Z" level=error msg="Failed to destroy network for sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.968721 containerd[1510]: time="2025-09-09T00:02:41.968692222Z" level=error msg="encountered an error cleaning up failed sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.968791 containerd[1510]: time="2025-09-09T00:02:41.968757014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-6z92f,Uid:2b12fb94-9dbc-4031-90bc-b634460d49b8,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.969237 
kubelet[2630]: E0909 00:02:41.969026 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.969237 kubelet[2630]: E0909 00:02:41.969134 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" Sep 9 00:02:41.969237 kubelet[2630]: E0909 00:02:41.969155 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" Sep 9 00:02:41.969389 kubelet[2630]: E0909 00:02:41.969196 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79646b996b-6z92f_calico-apiserver(2b12fb94-9dbc-4031-90bc-b634460d49b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79646b996b-6z92f_calico-apiserver(2b12fb94-9dbc-4031-90bc-b634460d49b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" podUID="2b12fb94-9dbc-4031-90bc-b634460d49b8" Sep 9 00:02:41.977493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2-shm.mount: Deactivated successfully. Sep 9 00:02:41.977788 systemd[1]: run-netns-cni\x2d3f77cf0a\x2d9374\x2defae\x2d0e27\x2d0e6456f73003.mount: Deactivated successfully. 
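Every sandbox create/teardown above fails with the same root cause: the Calico CNI plugin cannot read the node name that the calico/node container writes to /var/lib/calico/nodename once it is running with that host path mounted, exactly as the error text suggests checking. The following is a minimal illustrative sketch of that check, not the actual Calico plugin source; the file path is taken from the log, everything else is assumed for demonstration.

```go
// Illustrative sketch of the failing check: until calico/node has written
// /var/lib/calico/nodename, any CNI ADD/DEL on this host fails the same way.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // written by the calico/node container

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the condition the log keeps reporting: the file is absent
		// until calico/node is up and has mounted /var/lib/calico/.
		fmt.Fprintf(os.Stderr, "stat %s: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Println("node name:", strings.TrimSpace(string(data)))
}
```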
Sep 9 00:02:41.980112 containerd[1510]: time="2025-09-09T00:02:41.979986422Z" level=error msg="Failed to destroy network for sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.981472 containerd[1510]: time="2025-09-09T00:02:41.981428180Z" level=error msg="encountered an error cleaning up failed sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.981698 containerd[1510]: time="2025-09-09T00:02:41.981500878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rm7mk,Uid:8b3fc743-df1d-4d9a-822b-01f3200e3e51,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.981821 kubelet[2630]: E0909 00:02:41.981692 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:41.981821 kubelet[2630]: E0909 00:02:41.981747 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rm7mk" Sep 9 00:02:41.981821 kubelet[2630]: E0909 00:02:41.981774 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rm7mk" Sep 9 00:02:41.982124 kubelet[2630]: E0909 00:02:41.981812 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rm7mk_kube-system(8b3fc743-df1d-4d9a-822b-01f3200e3e51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rm7mk_kube-system(8b3fc743-df1d-4d9a-822b-01f3200e3e51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rm7mk" 
podUID="8b3fc743-df1d-4d9a-822b-01f3200e3e51" Sep 9 00:02:41.982248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8-shm.mount: Deactivated successfully. Sep 9 00:02:42.735231 kubelet[2630]: I0909 00:02:42.735175 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d" Sep 9 00:02:42.735922 containerd[1510]: time="2025-09-09T00:02:42.735858027Z" level=info msg="StopPodSandbox for \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\"" Sep 9 00:02:42.736487 containerd[1510]: time="2025-09-09T00:02:42.736114919Z" level=info msg="Ensure that sandbox 0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d in task-service has been cleanup successfully" Sep 9 00:02:42.738627 systemd[1]: run-netns-cni\x2d11ad64bd\x2de544\x2d0837\x2dc3bd\x2db7c68b795239.mount: Deactivated successfully. Sep 9 00:02:42.739315 containerd[1510]: time="2025-09-09T00:02:42.739288701Z" level=info msg="TearDown network for sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\" successfully" Sep 9 00:02:42.739315 containerd[1510]: time="2025-09-09T00:02:42.739310393Z" level=info msg="StopPodSandbox for \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\" returns successfully" Sep 9 00:02:42.739998 containerd[1510]: time="2025-09-09T00:02:42.739961785Z" level=info msg="StopPodSandbox for \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\"" Sep 9 00:02:42.740172 containerd[1510]: time="2025-09-09T00:02:42.740110696Z" level=info msg="TearDown network for sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\" successfully" Sep 9 00:02:42.740172 containerd[1510]: time="2025-09-09T00:02:42.740126225Z" level=info msg="StopPodSandbox for \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\" returns successfully" Sep 9 00:02:42.740589 containerd[1510]: time="2025-09-09T00:02:42.740481362Z" level=info msg="StopPodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\"" Sep 9 00:02:42.740657 containerd[1510]: time="2025-09-09T00:02:42.740578364Z" level=info msg="TearDown network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" successfully" Sep 9 00:02:42.740657 containerd[1510]: time="2025-09-09T00:02:42.740630261Z" level=info msg="StopPodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" returns successfully" Sep 9 00:02:42.740761 kubelet[2630]: I0909 00:02:42.740732 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e" Sep 9 00:02:42.741339 containerd[1510]: time="2025-09-09T00:02:42.741294619Z" level=info msg="StopPodSandbox for \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\"" Sep 9 00:02:42.741598 containerd[1510]: time="2025-09-09T00:02:42.741569666Z" level=info msg="Ensure that sandbox 8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e in task-service has been cleanup successfully" Sep 9 00:02:42.741724 containerd[1510]: time="2025-09-09T00:02:42.741670125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbs8t,Uid:13bd77bc-168d-4e24-bcab-4df0554bc784,Namespace:calico-system,Attempt:3,}" Sep 9 00:02:42.742445 kubelet[2630]: I0909 00:02:42.742422 2630 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8" Sep 9 00:02:42.743097 containerd[1510]: time="2025-09-09T00:02:42.742884877Z" level=info msg="StopPodSandbox for \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\"" Sep 9 00:02:42.743157 containerd[1510]: time="2025-09-09T00:02:42.743126982Z" level=info msg="Ensure that sandbox 5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8 in task-service has been cleanup successfully" Sep 9 00:02:42.745060 containerd[1510]: time="2025-09-09T00:02:42.744164321Z" level=info msg="TearDown network for sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\" successfully" Sep 9 00:02:42.745060 containerd[1510]: time="2025-09-09T00:02:42.744190330Z" level=info msg="StopPodSandbox for \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\" returns successfully" Sep 9 00:02:42.745060 containerd[1510]: time="2025-09-09T00:02:42.744302701Z" level=info msg="TearDown network for sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\" successfully" Sep 9 00:02:42.745060 containerd[1510]: time="2025-09-09T00:02:42.744320394Z" level=info msg="StopPodSandbox for \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\" returns successfully" Sep 9 00:02:42.745060 containerd[1510]: time="2025-09-09T00:02:42.744483781Z" level=info msg="StopPodSandbox for \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\"" Sep 9 00:02:42.745060 containerd[1510]: time="2025-09-09T00:02:42.744577357Z" level=info msg="TearDown network for sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\" successfully" Sep 9 00:02:42.745060 containerd[1510]: time="2025-09-09T00:02:42.744590331Z" level=info msg="StopPodSandbox for \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\" returns successfully" Sep 9 00:02:42.745060 containerd[1510]: time="2025-09-09T00:02:42.744665582Z" level=info msg="StopPodSandbox for \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\"" Sep 9 00:02:42.745060 containerd[1510]: time="2025-09-09T00:02:42.744896326Z" level=info msg="Ensure that sandbox 6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af in task-service has been cleanup successfully" Sep 9 00:02:42.744496 systemd[1]: run-netns-cni\x2da90be9e8\x2d5653\x2ddf68\x2d8f7e\x2da3ffc1fc6fbb.mount: Deactivated successfully. 
Sep 9 00:02:42.745461 kubelet[2630]: I0909 00:02:42.744240 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af" Sep 9 00:02:42.745500 containerd[1510]: time="2025-09-09T00:02:42.745324580Z" level=info msg="StopPodSandbox for \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\"" Sep 9 00:02:42.745500 containerd[1510]: time="2025-09-09T00:02:42.745421452Z" level=info msg="TearDown network for sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\" successfully" Sep 9 00:02:42.745500 containerd[1510]: time="2025-09-09T00:02:42.745435899Z" level=info msg="StopPodSandbox for \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\" returns successfully" Sep 9 00:02:42.745500 containerd[1510]: time="2025-09-09T00:02:42.745449134Z" level=info msg="StopPodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\"" Sep 9 00:02:42.745626 containerd[1510]: time="2025-09-09T00:02:42.745554643Z" level=info msg="TearDown network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" successfully" Sep 9 00:02:42.745626 containerd[1510]: time="2025-09-09T00:02:42.745571605Z" level=info msg="StopPodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" returns successfully" Sep 9 00:02:42.745807 containerd[1510]: time="2025-09-09T00:02:42.745784384Z" level=info msg="TearDown network for sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\" successfully" Sep 9 00:02:42.745937 containerd[1510]: time="2025-09-09T00:02:42.745867080Z" level=info msg="StopPodSandbox for \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\" returns successfully" Sep 9 00:02:42.746785 containerd[1510]: time="2025-09-09T00:02:42.746436189Z" level=info msg="StopPodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\"" Sep 9 00:02:42.746785 containerd[1510]: time="2025-09-09T00:02:42.746548029Z" level=info msg="TearDown network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" successfully" Sep 9 00:02:42.746785 containerd[1510]: time="2025-09-09T00:02:42.746563288Z" level=info msg="StopPodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" returns successfully" Sep 9 00:02:42.746785 containerd[1510]: time="2025-09-09T00:02:42.746550083Z" level=info msg="StopPodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\"" Sep 9 00:02:42.746785 containerd[1510]: time="2025-09-09T00:02:42.746464582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b854f49bb-nlfqw,Uid:fe12ff7b-73e6-42d0-a348-29ad8070fac9,Namespace:calico-system,Attempt:3,}" Sep 9 00:02:42.746785 containerd[1510]: time="2025-09-09T00:02:42.746658687Z" level=info msg="TearDown network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" successfully" Sep 9 00:02:42.746785 containerd[1510]: time="2025-09-09T00:02:42.746674056Z" level=info msg="StopPodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" returns successfully" Sep 9 00:02:42.746970 systemd[1]: run-netns-cni\x2d57be87f8\x2dd285\x2d7557\x2d11e8\x2d0078be9c665a.mount: Deactivated successfully. 
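The StopPodSandbox / "Ensure that sandbox ... has been cleanup successfully" / TearDown sequence above, followed by a new RunPodSandbox with the Attempt counter incremented (2 → 3), is the retry pattern the kubelet and containerd apply after a failed sandbox creation. The sketch below only mirrors that pattern for readability; it is a hypothetical loop with made-up helpers, not kubelet or containerd code.

```go
// Hedged sketch of the teardown-and-retry pattern visible in the log:
// a failed RunPodSandbox is stopped and its network torn down, then the
// pod is retried with the Attempt counter incremented.
package main

import (
	"errors"
	"fmt"
)

var errNoNodename = errors.New("stat /var/lib/calico/nodename: no such file or directory")

// runPodSandbox stands in for the CRI call; it keeps failing while the
// CNI plugin cannot resolve the node name.
func runPodSandbox(name string, attempt int) error {
	return fmt.Errorf("failed to setup network for sandbox %q (attempt %d): %w", name, attempt, errNoNodename)
}

// stopPodSandbox stands in for the cleanup step: tear down the network and
// remove the netns and shm mounts left by the failed sandbox.
func stopPodSandbox(name string) {
	fmt.Printf("StopPodSandbox %q: teardown successful\n", name)
}

func main() {
	const pod = "csi-node-driver-nbs8t"
	for attempt := 2; attempt <= 4; attempt++ {
		if err := runPodSandbox(pod, attempt); err != nil {
			fmt.Println("CreatePodSandbox failed:", err)
			stopPodSandbox(pod) // clean up before the next attempt
			continue
		}
		break
	}
}
```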
Sep 9 00:02:42.747123 kubelet[2630]: E0909 00:02:42.747004 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:42.747091 systemd[1]: run-netns-cni\x2d9991ab1d\x2d0b9d\x2d245f\x2d035f\x2dd4d9df1215be.mount: Deactivated successfully. Sep 9 00:02:42.747475 containerd[1510]: time="2025-09-09T00:02:42.747351218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rm7mk,Uid:8b3fc743-df1d-4d9a-822b-01f3200e3e51,Namespace:kube-system,Attempt:3,}" Sep 9 00:02:42.747535 containerd[1510]: time="2025-09-09T00:02:42.747365535Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\"" Sep 9 00:02:42.747586 containerd[1510]: time="2025-09-09T00:02:42.747562696Z" level=info msg="TearDown network for sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" successfully" Sep 9 00:02:42.747619 containerd[1510]: time="2025-09-09T00:02:42.747582473Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" returns successfully" Sep 9 00:02:42.748322 kubelet[2630]: I0909 00:02:42.748296 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da" Sep 9 00:02:42.748713 kubelet[2630]: E0909 00:02:42.748681 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:42.748990 containerd[1510]: time="2025-09-09T00:02:42.748957596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:3,}" Sep 9 00:02:42.749118 containerd[1510]: time="2025-09-09T00:02:42.748967134Z" level=info msg="StopPodSandbox for \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\"" Sep 9 00:02:42.749337 containerd[1510]: time="2025-09-09T00:02:42.749309527Z" level=info msg="Ensure that sandbox cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da in task-service has been cleanup successfully" Sep 9 00:02:42.749504 containerd[1510]: time="2025-09-09T00:02:42.749488473Z" level=info msg="TearDown network for sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\" successfully" Sep 9 00:02:42.749504 containerd[1510]: time="2025-09-09T00:02:42.749502970Z" level=info msg="StopPodSandbox for \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\" returns successfully" Sep 9 00:02:42.750113 containerd[1510]: time="2025-09-09T00:02:42.749865771Z" level=info msg="StopPodSandbox for \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\"" Sep 9 00:02:42.750113 containerd[1510]: time="2025-09-09T00:02:42.749944280Z" level=info msg="TearDown network for sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\" successfully" Sep 9 00:02:42.750113 containerd[1510]: time="2025-09-09T00:02:42.749954569Z" level=info msg="StopPodSandbox for \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\" returns successfully" Sep 9 00:02:42.750314 containerd[1510]: time="2025-09-09T00:02:42.750278197Z" level=info msg="StopPodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\"" Sep 9 00:02:42.750449 containerd[1510]: 
time="2025-09-09T00:02:42.750385188Z" level=info msg="TearDown network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" successfully" Sep 9 00:02:42.750449 containerd[1510]: time="2025-09-09T00:02:42.750399084Z" level=info msg="StopPodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" returns successfully" Sep 9 00:02:42.751055 kubelet[2630]: I0909 00:02:42.750622 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2" Sep 9 00:02:42.751125 containerd[1510]: time="2025-09-09T00:02:42.750903481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-6z92f,Uid:2b12fb94-9dbc-4031-90bc-b634460d49b8,Namespace:calico-apiserver,Attempt:3,}" Sep 9 00:02:42.751486 containerd[1510]: time="2025-09-09T00:02:42.751213043Z" level=info msg="StopPodSandbox for \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\"" Sep 9 00:02:42.751486 containerd[1510]: time="2025-09-09T00:02:42.751371932Z" level=info msg="Ensure that sandbox 55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2 in task-service has been cleanup successfully" Sep 9 00:02:42.751647 containerd[1510]: time="2025-09-09T00:02:42.751615238Z" level=info msg="TearDown network for sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\" successfully" Sep 9 00:02:42.751774 containerd[1510]: time="2025-09-09T00:02:42.751745143Z" level=info msg="StopPodSandbox for \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\" returns successfully" Sep 9 00:02:42.752505 containerd[1510]: time="2025-09-09T00:02:42.752188205Z" level=info msg="StopPodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\"" Sep 9 00:02:42.752505 containerd[1510]: time="2025-09-09T00:02:42.752353586Z" level=info msg="TearDown network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" successfully" Sep 9 00:02:42.752505 containerd[1510]: time="2025-09-09T00:02:42.752374776Z" level=info msg="StopPodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" returns successfully" Sep 9 00:02:42.752682 kubelet[2630]: I0909 00:02:42.752606 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661" Sep 9 00:02:42.752765 containerd[1510]: time="2025-09-09T00:02:42.752698083Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\"" Sep 9 00:02:42.752820 containerd[1510]: time="2025-09-09T00:02:42.752780658Z" level=info msg="TearDown network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" successfully" Sep 9 00:02:42.752820 containerd[1510]: time="2025-09-09T00:02:42.752791168Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" returns successfully" Sep 9 00:02:42.752954 containerd[1510]: time="2025-09-09T00:02:42.752933755Z" level=info msg="StopPodSandbox for \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\"" Sep 9 00:02:42.753140 containerd[1510]: time="2025-09-09T00:02:42.753124033Z" level=info msg="Ensure that sandbox e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661 in task-service has been cleanup successfully" Sep 9 00:02:42.753356 containerd[1510]: time="2025-09-09T00:02:42.753332304Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:3,}" Sep 9 00:02:42.753543 containerd[1510]: time="2025-09-09T00:02:42.753507674Z" level=info msg="TearDown network for sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\" successfully" Sep 9 00:02:42.753543 containerd[1510]: time="2025-09-09T00:02:42.753535275Z" level=info msg="StopPodSandbox for \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\" returns successfully" Sep 9 00:02:42.753910 containerd[1510]: time="2025-09-09T00:02:42.753750129Z" level=info msg="StopPodSandbox for \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\"" Sep 9 00:02:42.753910 containerd[1510]: time="2025-09-09T00:02:42.753841180Z" level=info msg="TearDown network for sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\" successfully" Sep 9 00:02:42.753910 containerd[1510]: time="2025-09-09T00:02:42.753854926Z" level=info msg="StopPodSandbox for \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\" returns successfully" Sep 9 00:02:42.754210 containerd[1510]: time="2025-09-09T00:02:42.754181109Z" level=info msg="StopPodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\"" Sep 9 00:02:42.754308 containerd[1510]: time="2025-09-09T00:02:42.754282519Z" level=info msg="TearDown network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" successfully" Sep 9 00:02:42.754308 containerd[1510]: time="2025-09-09T00:02:42.754304761Z" level=info msg="StopPodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" returns successfully" Sep 9 00:02:42.754598 kubelet[2630]: I0909 00:02:42.754564 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535" Sep 9 00:02:42.754893 containerd[1510]: time="2025-09-09T00:02:42.754762782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c5d8b85b-fq8gn,Uid:57b1360c-3eda-441d-822b-cfab485ba025,Namespace:calico-system,Attempt:3,}" Sep 9 00:02:42.755107 containerd[1510]: time="2025-09-09T00:02:42.755065319Z" level=info msg="StopPodSandbox for \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\"" Sep 9 00:02:42.755426 containerd[1510]: time="2025-09-09T00:02:42.755400719Z" level=info msg="Ensure that sandbox db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535 in task-service has been cleanup successfully" Sep 9 00:02:42.755591 containerd[1510]: time="2025-09-09T00:02:42.755569988Z" level=info msg="TearDown network for sandbox \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\" successfully" Sep 9 00:02:42.755630 containerd[1510]: time="2025-09-09T00:02:42.755589174Z" level=info msg="StopPodSandbox for \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\" returns successfully" Sep 9 00:02:42.760111 containerd[1510]: time="2025-09-09T00:02:42.760059522Z" level=info msg="StopPodSandbox for \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\"" Sep 9 00:02:42.760209 containerd[1510]: time="2025-09-09T00:02:42.760178695Z" level=info msg="TearDown network for sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\" successfully" Sep 9 00:02:42.760209 containerd[1510]: time="2025-09-09T00:02:42.760190648Z" level=info msg="StopPodSandbox for 
\"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\" returns successfully" Sep 9 00:02:42.760679 containerd[1510]: time="2025-09-09T00:02:42.760632108Z" level=info msg="StopPodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\"" Sep 9 00:02:42.760809 containerd[1510]: time="2025-09-09T00:02:42.760783632Z" level=info msg="TearDown network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" successfully" Sep 9 00:02:42.760809 containerd[1510]: time="2025-09-09T00:02:42.760802678Z" level=info msg="StopPodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" returns successfully" Sep 9 00:02:42.761459 containerd[1510]: time="2025-09-09T00:02:42.761428303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-cw46r,Uid:abf21d55-a6e7-4fd1-ad4c-e82f7525f680,Namespace:calico-apiserver,Attempt:3,}" Sep 9 00:02:42.973982 systemd[1]: run-netns-cni\x2d240e0c11\x2d3f55\x2d062a\x2d4f17\x2d5afcafc6998f.mount: Deactivated successfully. Sep 9 00:02:42.974141 systemd[1]: run-netns-cni\x2d2e8d100a\x2d2919\x2d7cba\x2d966e\x2d9a619073a550.mount: Deactivated successfully. Sep 9 00:02:42.974239 systemd[1]: run-netns-cni\x2db2886143\x2d2aba\x2dff2c\x2dfce8\x2d364b8ababbd5.mount: Deactivated successfully. Sep 9 00:02:42.974333 systemd[1]: run-netns-cni\x2db2cadfad\x2d70ef\x2de5ed\x2d84c1\x2d3d7839610527.mount: Deactivated successfully. Sep 9 00:02:44.382904 kubelet[2630]: I0909 00:02:44.382850 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:02:44.383598 kubelet[2630]: E0909 00:02:44.383272 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:44.761282 kubelet[2630]: E0909 00:02:44.761249 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:45.527779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount10800726.mount: Deactivated successfully. 
Sep 9 00:02:45.971816 containerd[1510]: time="2025-09-09T00:02:45.971730556Z" level=error msg="Failed to destroy network for sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:45.972344 containerd[1510]: time="2025-09-09T00:02:45.972274578Z" level=error msg="encountered an error cleaning up failed sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:45.972391 containerd[1510]: time="2025-09-09T00:02:45.972348697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:45.972672 kubelet[2630]: E0909 00:02:45.972616 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:45.973001 kubelet[2630]: E0909 00:02:45.972700 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:45.973001 kubelet[2630]: E0909 00:02:45.972736 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:45.973001 kubelet[2630]: E0909 00:02:45.972795 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4p4qb_kube-system(6de14301-f214-422d-9e12-0b69107cbf97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4p4qb_kube-system(6de14301-f214-422d-9e12-0b69107cbf97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4p4qb" 
podUID="6de14301-f214-422d-9e12-0b69107cbf97" Sep 9 00:02:46.235060 containerd[1510]: time="2025-09-09T00:02:46.234883881Z" level=error msg="Failed to destroy network for sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:46.235440 containerd[1510]: time="2025-09-09T00:02:46.235386456Z" level=error msg="encountered an error cleaning up failed sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:46.235563 containerd[1510]: time="2025-09-09T00:02:46.235465825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:46.235809 kubelet[2630]: E0909 00:02:46.235756 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:46.235865 kubelet[2630]: E0909 00:02:46.235832 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:46.235865 kubelet[2630]: E0909 00:02:46.235855 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:46.235930 kubelet[2630]: E0909 00:02:46.235899 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-ct9bv_calico-system(9d38a1be-6323-41e6-8564-b477a0eb94a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-ct9bv_calico-system(9d38a1be-6323-41e6-8564-b477a0eb94a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-ct9bv" podUID="9d38a1be-6323-41e6-8564-b477a0eb94a8" Sep 9 00:02:46.530923 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897-shm.mount: Deactivated successfully. Sep 9 00:02:46.531088 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033-shm.mount: Deactivated successfully. Sep 9 00:02:46.826312 kubelet[2630]: I0909 00:02:46.824823 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033" Sep 9 00:02:46.826442 containerd[1510]: time="2025-09-09T00:02:46.825341229Z" level=info msg="StopPodSandbox for \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\"" Sep 9 00:02:46.826442 containerd[1510]: time="2025-09-09T00:02:46.825604935Z" level=info msg="Ensure that sandbox 44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033 in task-service has been cleanup successfully" Sep 9 00:02:46.826536 kubelet[2630]: I0909 00:02:46.826440 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897" Sep 9 00:02:46.826836 containerd[1510]: time="2025-09-09T00:02:46.826805298Z" level=info msg="StopPodSandbox for \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\"" Sep 9 00:02:46.827025 containerd[1510]: time="2025-09-09T00:02:46.826997320Z" level=info msg="Ensure that sandbox c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897 in task-service has been cleanup successfully" Sep 9 00:02:46.828208 containerd[1510]: time="2025-09-09T00:02:46.828173858Z" level=info msg="TearDown network for sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\" successfully" Sep 9 00:02:46.828208 containerd[1510]: time="2025-09-09T00:02:46.828207311Z" level=info msg="StopPodSandbox for \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\" returns successfully" Sep 9 00:02:46.828292 containerd[1510]: time="2025-09-09T00:02:46.828267826Z" level=info msg="TearDown network for sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\" successfully" Sep 9 00:02:46.828325 containerd[1510]: time="2025-09-09T00:02:46.828288615Z" level=info msg="StopPodSandbox for \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\" returns successfully" Sep 9 00:02:46.828661 systemd[1]: run-netns-cni\x2d6ba82769\x2dfa33\x2d1054\x2dcd7b\x2d2e0b78919d8b.mount: Deactivated successfully. 
Sep 9 00:02:46.829412 containerd[1510]: time="2025-09-09T00:02:46.829355047Z" level=info msg="StopPodSandbox for \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\"" Sep 9 00:02:46.829597 containerd[1510]: time="2025-09-09T00:02:46.829496993Z" level=info msg="TearDown network for sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\" successfully" Sep 9 00:02:46.829597 containerd[1510]: time="2025-09-09T00:02:46.829545375Z" level=info msg="StopPodSandbox for \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\" returns successfully" Sep 9 00:02:46.829597 containerd[1510]: time="2025-09-09T00:02:46.829362461Z" level=info msg="StopPodSandbox for \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\"" Sep 9 00:02:46.829719 containerd[1510]: time="2025-09-09T00:02:46.829656183Z" level=info msg="TearDown network for sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\" successfully" Sep 9 00:02:46.829719 containerd[1510]: time="2025-09-09T00:02:46.829667684Z" level=info msg="StopPodSandbox for \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\" returns successfully" Sep 9 00:02:46.830365 containerd[1510]: time="2025-09-09T00:02:46.830069770Z" level=info msg="StopPodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\"" Sep 9 00:02:46.830365 containerd[1510]: time="2025-09-09T00:02:46.830092753Z" level=info msg="StopPodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\"" Sep 9 00:02:46.830365 containerd[1510]: time="2025-09-09T00:02:46.830163786Z" level=info msg="TearDown network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" successfully" Sep 9 00:02:46.830365 containerd[1510]: time="2025-09-09T00:02:46.830171190Z" level=info msg="TearDown network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" successfully" Sep 9 00:02:46.830365 containerd[1510]: time="2025-09-09T00:02:46.830185176Z" level=info msg="StopPodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" returns successfully" Sep 9 00:02:46.830365 containerd[1510]: time="2025-09-09T00:02:46.830176059Z" level=info msg="StopPodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" returns successfully" Sep 9 00:02:46.830839 containerd[1510]: time="2025-09-09T00:02:46.830456075Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\"" Sep 9 00:02:46.830839 containerd[1510]: time="2025-09-09T00:02:46.830536576Z" level=info msg="TearDown network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" successfully" Sep 9 00:02:46.830839 containerd[1510]: time="2025-09-09T00:02:46.830550061Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" returns successfully" Sep 9 00:02:46.830839 containerd[1510]: time="2025-09-09T00:02:46.830579296Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\"" Sep 9 00:02:46.830839 containerd[1510]: time="2025-09-09T00:02:46.830648015Z" level=info msg="TearDown network for sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" successfully" Sep 9 00:02:46.830839 containerd[1510]: time="2025-09-09T00:02:46.830697237Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" returns 
successfully" Sep 9 00:02:46.831265 kubelet[2630]: E0909 00:02:46.831112 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:46.831499 containerd[1510]: time="2025-09-09T00:02:46.831373608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:4,}" Sep 9 00:02:46.831499 containerd[1510]: time="2025-09-09T00:02:46.831408824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:4,}" Sep 9 00:02:46.832629 systemd[1]: run-netns-cni\x2dbe3db409\x2d88e3\x2d1f99\x2d2c41\x2d109fb96a2979.mount: Deactivated successfully. Sep 9 00:02:47.205608 containerd[1510]: time="2025-09-09T00:02:47.205557543Z" level=error msg="Failed to destroy network for sandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.206512 containerd[1510]: time="2025-09-09T00:02:47.206457212Z" level=error msg="encountered an error cleaning up failed sandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.206643 containerd[1510]: time="2025-09-09T00:02:47.206520711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbs8t,Uid:13bd77bc-168d-4e24-bcab-4df0554bc784,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.206814 kubelet[2630]: E0909 00:02:47.206763 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.207192 kubelet[2630]: E0909 00:02:47.206841 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:47.207192 kubelet[2630]: E0909 00:02:47.206874 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbs8t" Sep 9 00:02:47.207192 kubelet[2630]: E0909 00:02:47.206933 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nbs8t_calico-system(13bd77bc-168d-4e24-bcab-4df0554bc784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nbs8t_calico-system(13bd77bc-168d-4e24-bcab-4df0554bc784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nbs8t" podUID="13bd77bc-168d-4e24-bcab-4df0554bc784" Sep 9 00:02:47.207857 containerd[1510]: time="2025-09-09T00:02:47.207825943Z" level=error msg="Failed to destroy network for sandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.208330 containerd[1510]: time="2025-09-09T00:02:47.208302938Z" level=error msg="encountered an error cleaning up failed sandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.208400 containerd[1510]: time="2025-09-09T00:02:47.208367580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b854f49bb-nlfqw,Uid:fe12ff7b-73e6-42d0-a348-29ad8070fac9,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.208517 kubelet[2630]: E0909 00:02:47.208495 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.208647 kubelet[2630]: E0909 00:02:47.208598 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:47.208647 kubelet[2630]: E0909 00:02:47.208618 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b854f49bb-nlfqw" Sep 9 00:02:47.208647 kubelet[2630]: E0909 00:02:47.208643 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b854f49bb-nlfqw_calico-system(fe12ff7b-73e6-42d0-a348-29ad8070fac9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b854f49bb-nlfqw_calico-system(fe12ff7b-73e6-42d0-a348-29ad8070fac9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b854f49bb-nlfqw" podUID="fe12ff7b-73e6-42d0-a348-29ad8070fac9" Sep 9 00:02:47.284459 containerd[1510]: time="2025-09-09T00:02:47.284390607Z" level=error msg="Failed to destroy network for sandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.285041 containerd[1510]: time="2025-09-09T00:02:47.285003629Z" level=error msg="encountered an error cleaning up failed sandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.285663 containerd[1510]: time="2025-09-09T00:02:47.285636156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-6z92f,Uid:2b12fb94-9dbc-4031-90bc-b634460d49b8,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.286399 kubelet[2630]: E0909 00:02:47.285933 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.286399 kubelet[2630]: E0909 00:02:47.286019 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" Sep 9 00:02:47.286399 kubelet[2630]: E0909 00:02:47.286072 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" Sep 9 00:02:47.286550 kubelet[2630]: E0909 00:02:47.286126 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79646b996b-6z92f_calico-apiserver(2b12fb94-9dbc-4031-90bc-b634460d49b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79646b996b-6z92f_calico-apiserver(2b12fb94-9dbc-4031-90bc-b634460d49b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" podUID="2b12fb94-9dbc-4031-90bc-b634460d49b8" Sep 9 00:02:47.301127 containerd[1510]: time="2025-09-09T00:02:47.301024843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:47.305190 containerd[1510]: time="2025-09-09T00:02:47.305099264Z" level=error msg="Failed to destroy network for sandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.305776 containerd[1510]: time="2025-09-09T00:02:47.305735388Z" level=error msg="encountered an error cleaning up failed sandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.305939 containerd[1510]: time="2025-09-09T00:02:47.305812113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rm7mk,Uid:8b3fc743-df1d-4d9a-822b-01f3200e3e51,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.307066 containerd[1510]: time="2025-09-09T00:02:47.306000416Z" level=error msg="Failed to destroy network for sandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.307066 containerd[1510]: time="2025-09-09T00:02:47.306509933Z" level=error msg="encountered an error cleaning up failed sandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.307066 containerd[1510]: 
time="2025-09-09T00:02:47.306588671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c5d8b85b-fq8gn,Uid:57b1360c-3eda-441d-822b-cfab485ba025,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.307310 kubelet[2630]: E0909 00:02:47.306132 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.307310 kubelet[2630]: E0909 00:02:47.306230 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rm7mk" Sep 9 00:02:47.307310 kubelet[2630]: E0909 00:02:47.306257 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rm7mk" Sep 9 00:02:47.307410 kubelet[2630]: E0909 00:02:47.306316 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rm7mk_kube-system(8b3fc743-df1d-4d9a-822b-01f3200e3e51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rm7mk_kube-system(8b3fc743-df1d-4d9a-822b-01f3200e3e51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rm7mk" podUID="8b3fc743-df1d-4d9a-822b-01f3200e3e51" Sep 9 00:02:47.307410 kubelet[2630]: E0909 00:02:47.306825 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.307410 kubelet[2630]: E0909 00:02:47.306897 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" Sep 9 00:02:47.307512 kubelet[2630]: E0909 00:02:47.306932 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" Sep 9 00:02:47.307512 kubelet[2630]: E0909 00:02:47.306991 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68c5d8b85b-fq8gn_calico-system(57b1360c-3eda-441d-822b-cfab485ba025)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68c5d8b85b-fq8gn_calico-system(57b1360c-3eda-441d-822b-cfab485ba025)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" podUID="57b1360c-3eda-441d-822b-cfab485ba025" Sep 9 00:02:47.312648 containerd[1510]: time="2025-09-09T00:02:47.312591002Z" level=error msg="Failed to destroy network for sandbox \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.313068 containerd[1510]: time="2025-09-09T00:02:47.313008707Z" level=error msg="encountered an error cleaning up failed sandbox \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.313124 containerd[1510]: time="2025-09-09T00:02:47.313106500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-cw46r,Uid:abf21d55-a6e7-4fd1-ad4c-e82f7525f680,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.313421 kubelet[2630]: E0909 00:02:47.313383 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.313492 kubelet[2630]: E0909 00:02:47.313446 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" Sep 9 00:02:47.313534 kubelet[2630]: E0909 00:02:47.313468 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" Sep 9 00:02:47.313567 kubelet[2630]: E0909 00:02:47.313538 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79646b996b-cw46r_calico-apiserver(abf21d55-a6e7-4fd1-ad4c-e82f7525f680)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79646b996b-cw46r_calico-apiserver(abf21d55-a6e7-4fd1-ad4c-e82f7525f680)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" podUID="abf21d55-a6e7-4fd1-ad4c-e82f7525f680" Sep 9 00:02:47.316185 containerd[1510]: time="2025-09-09T00:02:47.316117374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:02:47.319395 containerd[1510]: time="2025-09-09T00:02:47.319349333Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:47.333363 containerd[1510]: time="2025-09-09T00:02:47.331757844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:47.333363 containerd[1510]: time="2025-09-09T00:02:47.332534603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.709685065s" Sep 9 00:02:47.333363 containerd[1510]: time="2025-09-09T00:02:47.332561143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:02:47.351664 containerd[1510]: time="2025-09-09T00:02:47.351607959Z" level=info msg="CreateContainer within sandbox \"9f73349518c4abbbffb24efed96c271ee8b5f404850aceac744237540363d6ae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:02:47.376771 containerd[1510]: time="2025-09-09T00:02:47.376689438Z" level=error msg="Failed to destroy network for sandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.377157 containerd[1510]: time="2025-09-09T00:02:47.377131047Z" level=error msg="encountered an error cleaning up failed sandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.377274 containerd[1510]: time="2025-09-09T00:02:47.377192121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.377572 kubelet[2630]: E0909 00:02:47.377520 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.377642 kubelet[2630]: E0909 00:02:47.377602 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:47.377642 kubelet[2630]: E0909 00:02:47.377629 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4p4qb" Sep 9 00:02:47.377719 kubelet[2630]: E0909 00:02:47.377689 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4p4qb_kube-system(6de14301-f214-422d-9e12-0b69107cbf97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4p4qb_kube-system(6de14301-f214-422d-9e12-0b69107cbf97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4p4qb" podUID="6de14301-f214-422d-9e12-0b69107cbf97" Sep 9 00:02:47.380149 containerd[1510]: time="2025-09-09T00:02:47.379988703Z" level=error msg="Failed to destroy network for sandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.380555 containerd[1510]: time="2025-09-09T00:02:47.380497949Z" level=error msg="encountered an error cleaning up failed sandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.380623 containerd[1510]: time="2025-09-09T00:02:47.380597196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.380853 kubelet[2630]: E0909 00:02:47.380822 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:02:47.380928 kubelet[2630]: E0909 00:02:47.380868 2630 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:47.380928 kubelet[2630]: E0909 00:02:47.380889 2630 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ct9bv" Sep 9 00:02:47.380988 kubelet[2630]: E0909 00:02:47.380931 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-ct9bv_calico-system(9d38a1be-6323-41e6-8564-b477a0eb94a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-ct9bv_calico-system(9d38a1be-6323-41e6-8564-b477a0eb94a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-ct9bv" podUID="9d38a1be-6323-41e6-8564-b477a0eb94a8" Sep 9 00:02:47.530413 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb-shm.mount: Deactivated successfully. Sep 9 00:02:47.530525 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b-shm.mount: Deactivated successfully. Sep 9 00:02:47.530604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e-shm.mount: Deactivated successfully. Sep 9 00:02:47.530679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e-shm.mount: Deactivated successfully. Sep 9 00:02:47.530764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb-shm.mount: Deactivated successfully. Sep 9 00:02:47.831140 kubelet[2630]: I0909 00:02:47.830313 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb" Sep 9 00:02:47.831259 containerd[1510]: time="2025-09-09T00:02:47.830889926Z" level=info msg="StopPodSandbox for \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\"" Sep 9 00:02:47.831259 containerd[1510]: time="2025-09-09T00:02:47.831150976Z" level=info msg="Ensure that sandbox eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb in task-service has been cleanup successfully" Sep 9 00:02:47.834049 containerd[1510]: time="2025-09-09T00:02:47.831371160Z" level=info msg="TearDown network for sandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\" successfully" Sep 9 00:02:47.834049 containerd[1510]: time="2025-09-09T00:02:47.831391788Z" level=info msg="StopPodSandbox for \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\" returns successfully" Sep 9 00:02:47.834049 containerd[1510]: time="2025-09-09T00:02:47.832528473Z" level=info msg="StopPodSandbox for \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\"" Sep 9 00:02:47.834049 containerd[1510]: time="2025-09-09T00:02:47.832619634Z" level=info msg="TearDown network for sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\" successfully" Sep 9 00:02:47.834049 containerd[1510]: time="2025-09-09T00:02:47.832631386Z" level=info msg="StopPodSandbox for \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\" returns successfully" Sep 9 00:02:47.834049 containerd[1510]: time="2025-09-09T00:02:47.832937611Z" level=info msg="StopPodSandbox for \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\"" Sep 9 00:02:47.834049 containerd[1510]: time="2025-09-09T00:02:47.833024414Z" level=info msg="TearDown network for sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\" successfully" Sep 9 00:02:47.834049 containerd[1510]: time="2025-09-09T00:02:47.833052066Z" level=info msg="StopPodSandbox for \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\" returns successfully" Sep 9 00:02:47.834049 containerd[1510]: time="2025-09-09T00:02:47.833356017Z" level=info msg="StopPodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\"" Sep 9 00:02:47.834049 containerd[1510]: time="2025-09-09T00:02:47.833460653Z" level=info msg="TearDown network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" successfully" Sep 9 00:02:47.834049 containerd[1510]: 
time="2025-09-09T00:02:47.833476052Z" level=info msg="StopPodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" returns successfully" Sep 9 00:02:47.834682 kubelet[2630]: I0909 00:02:47.833665 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e" Sep 9 00:02:47.834731 containerd[1510]: time="2025-09-09T00:02:47.834113539Z" level=info msg="StopPodSandbox for \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\"" Sep 9 00:02:47.834731 containerd[1510]: time="2025-09-09T00:02:47.834237162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbs8t,Uid:13bd77bc-168d-4e24-bcab-4df0554bc784,Namespace:calico-system,Attempt:4,}" Sep 9 00:02:47.834731 containerd[1510]: time="2025-09-09T00:02:47.834309738Z" level=info msg="Ensure that sandbox ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e in task-service has been cleanup successfully" Sep 9 00:02:47.834731 containerd[1510]: time="2025-09-09T00:02:47.834503541Z" level=info msg="TearDown network for sandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\" successfully" Sep 9 00:02:47.834731 containerd[1510]: time="2025-09-09T00:02:47.834517578Z" level=info msg="StopPodSandbox for \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\" returns successfully" Sep 9 00:02:47.834963 containerd[1510]: time="2025-09-09T00:02:47.834873286Z" level=info msg="StopPodSandbox for \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\"" Sep 9 00:02:47.835023 containerd[1510]: time="2025-09-09T00:02:47.835003680Z" level=info msg="TearDown network for sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\" successfully" Sep 9 00:02:47.835132 containerd[1510]: time="2025-09-09T00:02:47.835108236Z" level=info msg="StopPodSandbox for \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\" returns successfully" Sep 9 00:02:47.835167 systemd[1]: run-netns-cni\x2d016d12d0\x2decaa\x2dbbff\x2d311a\x2d3a39cffbd458.mount: Deactivated successfully. 
Sep 9 00:02:47.835941 containerd[1510]: time="2025-09-09T00:02:47.835917226Z" level=info msg="StopPodSandbox for \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\"" Sep 9 00:02:47.836049 containerd[1510]: time="2025-09-09T00:02:47.836016492Z" level=info msg="TearDown network for sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\" successfully" Sep 9 00:02:47.836100 containerd[1510]: time="2025-09-09T00:02:47.836075904Z" level=info msg="StopPodSandbox for \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\" returns successfully" Sep 9 00:02:47.837135 containerd[1510]: time="2025-09-09T00:02:47.837110667Z" level=info msg="StopPodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\"" Sep 9 00:02:47.837135 containerd[1510]: time="2025-09-09T00:02:47.837234971Z" level=info msg="TearDown network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" successfully" Sep 9 00:02:47.837135 containerd[1510]: time="2025-09-09T00:02:47.837250820Z" level=info msg="StopPodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" returns successfully" Sep 9 00:02:47.837702 containerd[1510]: time="2025-09-09T00:02:47.837677131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b854f49bb-nlfqw,Uid:fe12ff7b-73e6-42d0-a348-29ad8070fac9,Namespace:calico-system,Attempt:4,}" Sep 9 00:02:47.838167 kubelet[2630]: I0909 00:02:47.838146 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b" Sep 9 00:02:47.838578 containerd[1510]: time="2025-09-09T00:02:47.838535202Z" level=info msg="StopPodSandbox for \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\"" Sep 9 00:02:47.838808 containerd[1510]: time="2025-09-09T00:02:47.838732022Z" level=info msg="Ensure that sandbox 27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b in task-service has been cleanup successfully" Sep 9 00:02:47.838819 systemd[1]: run-netns-cni\x2d045795c4\x2da8f6\x2dfea6\x2d6d98\x2d7c115399185c.mount: Deactivated successfully. 
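
Interleaved with the failures, the pull of ghcr.io/flatcar/calico/node:v3.30.3 recorded above completed: roughly 157 MB read in about 7.7 s, on the order of 19 MiB/s. The back-of-the-envelope rate, using the two figures exactly as containerd logged them:

    package main

    import "fmt"

    func main() {
        // Figures taken verbatim from the containerd entries above.
        const bytesRead = 157078339     // "active requests=0, bytes read=157078339"
        const pullSeconds = 7.709685065 // "... in 7.709685065s"

        rate := float64(bytesRead) / pullSeconds // bytes per second
        fmt.Printf("average pull rate: %.1f MiB/s\n", rate/(1<<20))
    }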
Sep 9 00:02:47.838939 containerd[1510]: time="2025-09-09T00:02:47.838918020Z" level=info msg="TearDown network for sandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\" successfully" Sep 9 00:02:47.839162 containerd[1510]: time="2025-09-09T00:02:47.838983003Z" level=info msg="StopPodSandbox for \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\" returns successfully" Sep 9 00:02:47.839706 containerd[1510]: time="2025-09-09T00:02:47.839686253Z" level=info msg="StopPodSandbox for \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\"" Sep 9 00:02:47.839891 containerd[1510]: time="2025-09-09T00:02:47.839874166Z" level=info msg="TearDown network for sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\" successfully" Sep 9 00:02:47.839994 containerd[1510]: time="2025-09-09T00:02:47.839964797Z" level=info msg="StopPodSandbox for \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\" returns successfully" Sep 9 00:02:47.840408 containerd[1510]: time="2025-09-09T00:02:47.840262465Z" level=info msg="StopPodSandbox for \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\"" Sep 9 00:02:47.840408 containerd[1510]: time="2025-09-09T00:02:47.840357664Z" level=info msg="TearDown network for sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\" successfully" Sep 9 00:02:47.840408 containerd[1510]: time="2025-09-09T00:02:47.840372021Z" level=info msg="StopPodSandbox for \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\" returns successfully" Sep 9 00:02:47.840568 containerd[1510]: time="2025-09-09T00:02:47.840547270Z" level=info msg="StopPodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\"" Sep 9 00:02:47.840651 containerd[1510]: time="2025-09-09T00:02:47.840635937Z" level=info msg="TearDown network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" successfully" Sep 9 00:02:47.840687 containerd[1510]: time="2025-09-09T00:02:47.840649693Z" level=info msg="StopPodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" returns successfully" Sep 9 00:02:47.840727 kubelet[2630]: I0909 00:02:47.840666 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf" Sep 9 00:02:47.841046 kubelet[2630]: E0909 00:02:47.841001 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:47.841108 containerd[1510]: time="2025-09-09T00:02:47.841049253Z" level=info msg="StopPodSandbox for \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\"" Sep 9 00:02:47.841282 containerd[1510]: time="2025-09-09T00:02:47.841254358Z" level=info msg="Ensure that sandbox b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf in task-service has been cleanup successfully" Sep 9 00:02:47.841551 containerd[1510]: time="2025-09-09T00:02:47.841499018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rm7mk,Uid:8b3fc743-df1d-4d9a-822b-01f3200e3e51,Namespace:kube-system,Attempt:4,}" Sep 9 00:02:47.841743 containerd[1510]: time="2025-09-09T00:02:47.841717187Z" level=info msg="TearDown network for sandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\" successfully" Sep 9 00:02:47.841810 containerd[1510]: 
time="2025-09-09T00:02:47.841757462Z" level=info msg="StopPodSandbox for \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\" returns successfully" Sep 9 00:02:47.841983 containerd[1510]: time="2025-09-09T00:02:47.841959963Z" level=info msg="StopPodSandbox for \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\"" Sep 9 00:02:47.842113 containerd[1510]: time="2025-09-09T00:02:47.842093163Z" level=info msg="TearDown network for sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\" successfully" Sep 9 00:02:47.842113 containerd[1510]: time="2025-09-09T00:02:47.842110045Z" level=info msg="StopPodSandbox for \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\" returns successfully" Sep 9 00:02:47.842420 containerd[1510]: time="2025-09-09T00:02:47.842395100Z" level=info msg="StopPodSandbox for \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\"" Sep 9 00:02:47.842536 containerd[1510]: time="2025-09-09T00:02:47.842515146Z" level=info msg="TearDown network for sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\" successfully" Sep 9 00:02:47.842598 containerd[1510]: time="2025-09-09T00:02:47.842534933Z" level=info msg="StopPodSandbox for \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\" returns successfully" Sep 9 00:02:47.842907 containerd[1510]: time="2025-09-09T00:02:47.842742453Z" level=info msg="StopPodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\"" Sep 9 00:02:47.842907 containerd[1510]: time="2025-09-09T00:02:47.842830107Z" level=info msg="TearDown network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" successfully" Sep 9 00:02:47.842907 containerd[1510]: time="2025-09-09T00:02:47.842843583Z" level=info msg="StopPodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" returns successfully" Sep 9 00:02:47.843104 containerd[1510]: time="2025-09-09T00:02:47.843082781Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\"" Sep 9 00:02:47.843170 containerd[1510]: time="2025-09-09T00:02:47.843156450Z" level=info msg="TearDown network for sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" successfully" Sep 9 00:02:47.843170 containerd[1510]: time="2025-09-09T00:02:47.843167560Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" returns successfully" Sep 9 00:02:47.843401 kubelet[2630]: E0909 00:02:47.843301 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:47.843401 kubelet[2630]: I0909 00:02:47.843382 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e" Sep 9 00:02:47.843559 containerd[1510]: time="2025-09-09T00:02:47.843530773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:5,}" Sep 9 00:02:47.843959 containerd[1510]: time="2025-09-09T00:02:47.843833111Z" level=info msg="StopPodSandbox for \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\"" Sep 9 00:02:47.844053 containerd[1510]: time="2025-09-09T00:02:47.844013389Z" level=info msg="Ensure that sandbox 
34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e in task-service has been cleanup successfully" Sep 9 00:02:47.844306 containerd[1510]: time="2025-09-09T00:02:47.844222402Z" level=info msg="TearDown network for sandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\" successfully" Sep 9 00:02:47.844306 containerd[1510]: time="2025-09-09T00:02:47.844242910Z" level=info msg="StopPodSandbox for \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\" returns successfully" Sep 9 00:02:47.844530 containerd[1510]: time="2025-09-09T00:02:47.844506555Z" level=info msg="StopPodSandbox for \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\"" Sep 9 00:02:47.844749 containerd[1510]: time="2025-09-09T00:02:47.844727350Z" level=info msg="TearDown network for sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\" successfully" Sep 9 00:02:47.844749 containerd[1510]: time="2025-09-09T00:02:47.844747508Z" level=info msg="StopPodSandbox for \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\" returns successfully" Sep 9 00:02:47.845149 containerd[1510]: time="2025-09-09T00:02:47.844969905Z" level=info msg="StopPodSandbox for \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\"" Sep 9 00:02:47.845149 containerd[1510]: time="2025-09-09T00:02:47.845079791Z" level=info msg="TearDown network for sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\" successfully" Sep 9 00:02:47.845149 containerd[1510]: time="2025-09-09T00:02:47.845094760Z" level=info msg="StopPodSandbox for \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\" returns successfully" Sep 9 00:02:47.845265 kubelet[2630]: I0909 00:02:47.844999 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb" Sep 9 00:02:47.845469 containerd[1510]: time="2025-09-09T00:02:47.845429428Z" level=info msg="StopPodSandbox for \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\"" Sep 9 00:02:47.845600 containerd[1510]: time="2025-09-09T00:02:47.845584861Z" level=info msg="Ensure that sandbox 145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb in task-service has been cleanup successfully" Sep 9 00:02:47.845788 containerd[1510]: time="2025-09-09T00:02:47.845753507Z" level=info msg="StopPodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\"" Sep 9 00:02:47.845878 containerd[1510]: time="2025-09-09T00:02:47.845859065Z" level=info msg="TearDown network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" successfully" Sep 9 00:02:47.845918 containerd[1510]: time="2025-09-09T00:02:47.845868923Z" level=info msg="TearDown network for sandbox \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\" successfully" Sep 9 00:02:47.845918 containerd[1510]: time="2025-09-09T00:02:47.845893219Z" level=info msg="StopPodSandbox for \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\" returns successfully" Sep 9 00:02:47.845991 containerd[1510]: time="2025-09-09T00:02:47.845875596Z" level=info msg="StopPodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" returns successfully" Sep 9 00:02:47.846241 containerd[1510]: time="2025-09-09T00:02:47.846219863Z" level=info msg="StopPodSandbox for \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\"" Sep 9 00:02:47.846288 
containerd[1510]: time="2025-09-09T00:02:47.846240251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-6z92f,Uid:2b12fb94-9dbc-4031-90bc-b634460d49b8,Namespace:calico-apiserver,Attempt:4,}" Sep 9 00:02:47.846338 containerd[1510]: time="2025-09-09T00:02:47.846298721Z" level=info msg="TearDown network for sandbox \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\" successfully" Sep 9 00:02:47.846338 containerd[1510]: time="2025-09-09T00:02:47.846309060Z" level=info msg="StopPodSandbox for \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\" returns successfully" Sep 9 00:02:47.846548 containerd[1510]: time="2025-09-09T00:02:47.846531558Z" level=info msg="StopPodSandbox for \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\"" Sep 9 00:02:47.846638 containerd[1510]: time="2025-09-09T00:02:47.846598854Z" level=info msg="TearDown network for sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\" successfully" Sep 9 00:02:47.846638 containerd[1510]: time="2025-09-09T00:02:47.846607340Z" level=info msg="StopPodSandbox for \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\" returns successfully" Sep 9 00:02:47.846979 containerd[1510]: time="2025-09-09T00:02:47.846825520Z" level=info msg="StopPodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\"" Sep 9 00:02:47.846979 containerd[1510]: time="2025-09-09T00:02:47.846913575Z" level=info msg="TearDown network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" successfully" Sep 9 00:02:47.846979 containerd[1510]: time="2025-09-09T00:02:47.846926028Z" level=info msg="StopPodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" returns successfully" Sep 9 00:02:47.847401 containerd[1510]: time="2025-09-09T00:02:47.847243505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-cw46r,Uid:abf21d55-a6e7-4fd1-ad4c-e82f7525f680,Namespace:calico-apiserver,Attempt:4,}" Sep 9 00:02:47.847659 kubelet[2630]: I0909 00:02:47.847644 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156" Sep 9 00:02:47.847962 containerd[1510]: time="2025-09-09T00:02:47.847931155Z" level=info msg="StopPodSandbox for \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\"" Sep 9 00:02:47.848174 containerd[1510]: time="2025-09-09T00:02:47.848143294Z" level=info msg="Ensure that sandbox 5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156 in task-service has been cleanup successfully" Sep 9 00:02:47.848391 containerd[1510]: time="2025-09-09T00:02:47.848312983Z" level=info msg="TearDown network for sandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\" successfully" Sep 9 00:02:47.848391 containerd[1510]: time="2025-09-09T00:02:47.848334303Z" level=info msg="StopPodSandbox for \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\" returns successfully" Sep 9 00:02:47.848682 containerd[1510]: time="2025-09-09T00:02:47.848666026Z" level=info msg="StopPodSandbox for \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\"" Sep 9 00:02:47.848795 containerd[1510]: time="2025-09-09T00:02:47.848781843Z" level=info msg="TearDown network for sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\" successfully" Sep 9 00:02:47.848862 containerd[1510]: 
time="2025-09-09T00:02:47.848844551Z" level=info msg="StopPodSandbox for \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\" returns successfully" Sep 9 00:02:47.849238 containerd[1510]: time="2025-09-09T00:02:47.849212853Z" level=info msg="StopPodSandbox for \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\"" Sep 9 00:02:47.849319 kubelet[2630]: I0909 00:02:47.849300 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a" Sep 9 00:02:47.849376 containerd[1510]: time="2025-09-09T00:02:47.849305486Z" level=info msg="TearDown network for sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\" successfully" Sep 9 00:02:47.849376 containerd[1510]: time="2025-09-09T00:02:47.849319082Z" level=info msg="StopPodSandbox for \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\" returns successfully" Sep 9 00:02:47.849635 containerd[1510]: time="2025-09-09T00:02:47.849595351Z" level=info msg="StopPodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\"" Sep 9 00:02:47.849698 containerd[1510]: time="2025-09-09T00:02:47.849679238Z" level=info msg="StopPodSandbox for \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\"" Sep 9 00:02:47.849778 containerd[1510]: time="2025-09-09T00:02:47.849683667Z" level=info msg="TearDown network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" successfully" Sep 9 00:02:47.849811 containerd[1510]: time="2025-09-09T00:02:47.849776661Z" level=info msg="StopPodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" returns successfully" Sep 9 00:02:47.849811 containerd[1510]: time="2025-09-09T00:02:47.849806908Z" level=info msg="Ensure that sandbox fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a in task-service has been cleanup successfully" Sep 9 00:02:47.850069 containerd[1510]: time="2025-09-09T00:02:47.850021371Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\"" Sep 9 00:02:47.850157 containerd[1510]: time="2025-09-09T00:02:47.850118533Z" level=info msg="TearDown network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" successfully" Sep 9 00:02:47.850157 containerd[1510]: time="2025-09-09T00:02:47.850128743Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" returns successfully" Sep 9 00:02:47.850256 containerd[1510]: time="2025-09-09T00:02:47.850186751Z" level=info msg="TearDown network for sandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\" successfully" Sep 9 00:02:47.850256 containerd[1510]: time="2025-09-09T00:02:47.850195398Z" level=info msg="StopPodSandbox for \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\" returns successfully" Sep 9 00:02:47.850429 containerd[1510]: time="2025-09-09T00:02:47.850411744Z" level=info msg="StopPodSandbox for \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\"" Sep 9 00:02:47.850486 containerd[1510]: time="2025-09-09T00:02:47.850477939Z" level=info msg="TearDown network for sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\" successfully" Sep 9 00:02:47.850518 containerd[1510]: time="2025-09-09T00:02:47.850485593Z" level=info msg="StopPodSandbox for \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\" returns 
successfully" Sep 9 00:02:47.850549 containerd[1510]: time="2025-09-09T00:02:47.850521170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:5,}" Sep 9 00:02:47.850875 containerd[1510]: time="2025-09-09T00:02:47.850853815Z" level=info msg="StopPodSandbox for \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\"" Sep 9 00:02:47.850968 containerd[1510]: time="2025-09-09T00:02:47.850943413Z" level=info msg="TearDown network for sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\" successfully" Sep 9 00:02:47.851007 containerd[1510]: time="2025-09-09T00:02:47.850966726Z" level=info msg="StopPodSandbox for \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\" returns successfully" Sep 9 00:02:47.851240 containerd[1510]: time="2025-09-09T00:02:47.851218188Z" level=info msg="StopPodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\"" Sep 9 00:02:47.851323 containerd[1510]: time="2025-09-09T00:02:47.851304811Z" level=info msg="TearDown network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" successfully" Sep 9 00:02:47.851375 containerd[1510]: time="2025-09-09T00:02:47.851319930Z" level=info msg="StopPodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" returns successfully" Sep 9 00:02:47.851690 containerd[1510]: time="2025-09-09T00:02:47.851665137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c5d8b85b-fq8gn,Uid:57b1360c-3eda-441d-822b-cfab485ba025,Namespace:calico-system,Attempt:4,}" Sep 9 00:02:48.013939 containerd[1510]: time="2025-09-09T00:02:48.013864258Z" level=info msg="CreateContainer within sandbox \"9f73349518c4abbbffb24efed96c271ee8b5f404850aceac744237540363d6ae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8bab6dc0b213c0786d498a7e3bb66dc0c45b64b8b28e4d9ab646ce062f1a6bf7\"" Sep 9 00:02:48.014506 containerd[1510]: time="2025-09-09T00:02:48.014480224Z" level=info msg="StartContainer for \"8bab6dc0b213c0786d498a7e3bb66dc0c45b64b8b28e4d9ab646ce062f1a6bf7\"" Sep 9 00:02:48.075229 systemd[1]: Started cri-containerd-8bab6dc0b213c0786d498a7e3bb66dc0c45b64b8b28e4d9ab646ce062f1a6bf7.scope - libcontainer container 8bab6dc0b213c0786d498a7e3bb66dc0c45b64b8b28e4d9ab646ce062f1a6bf7. Sep 9 00:02:48.188903 containerd[1510]: time="2025-09-09T00:02:48.188841130Z" level=info msg="StartContainer for \"8bab6dc0b213c0786d498a7e3bb66dc0c45b64b8b28e4d9ab646ce062f1a6bf7\" returns successfully" Sep 9 00:02:48.221464 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:02:48.221750 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 9 00:02:48.528831 systemd[1]: run-netns-cni\x2d9df4378f\x2d7285\x2d75dc\x2df8f7\x2d71d746ffe314.mount: Deactivated successfully. Sep 9 00:02:48.528959 systemd[1]: run-netns-cni\x2df9c2d0a3\x2de689\x2d45a2\x2d3733\x2dbf0e07bda5c3.mount: Deactivated successfully. Sep 9 00:02:48.529048 systemd[1]: run-netns-cni\x2db22d15a7\x2d2179\x2dd1dc\x2d66a3\x2dd949eea7bfd9.mount: Deactivated successfully. Sep 9 00:02:48.529125 systemd[1]: run-netns-cni\x2d157e673b\x2dd2fa\x2d730a\x2d55bd\x2d73b711d00663.mount: Deactivated successfully. Sep 9 00:02:48.529195 systemd[1]: run-netns-cni\x2d74d7b4b7\x2d47a7\x2d8547\x2dba08\x2df0640d3bef21.mount: Deactivated successfully. 
Sep 9 00:02:48.529340 systemd[1]: run-netns-cni\x2de2a2f4e2\x2dba9b\x2d15c0\x2d4bbf\x2d11b402893bd2.mount: Deactivated successfully. Sep 9 00:02:49.032496 kubelet[2630]: I0909 00:02:49.032424 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m2crl" podStartSLOduration=2.862169438 podStartE2EDuration="23.032409254s" podCreationTimestamp="2025-09-09 00:02:26 +0000 UTC" firstStartedPulling="2025-09-09 00:02:27.164193613 +0000 UTC m=+17.831764434" lastFinishedPulling="2025-09-09 00:02:47.334433429 +0000 UTC m=+38.002004250" observedRunningTime="2025-09-09 00:02:49.031642766 +0000 UTC m=+39.699213587" watchObservedRunningTime="2025-09-09 00:02:49.032409254 +0000 UTC m=+39.699980075" Sep 9 00:02:50.528915 systemd-networkd[1423]: calidc37378c79f: Link UP Sep 9 00:02:50.529344 systemd-networkd[1423]: calidc37378c79f: Gained carrier Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.184 [INFO][4701] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.208 [INFO][4701] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0 calico-apiserver-79646b996b- calico-apiserver 2b12fb94-9dbc-4031-90bc-b634460d49b8 819 0 2025-09-09 00:02:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79646b996b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79646b996b-6z92f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidc37378c79f [] [] }} ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-6z92f" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--6z92f-" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.209 [INFO][4701] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-6z92f" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.316 [INFO][4794] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" HandleID="k8s-pod-network.76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Workload="localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.316 [INFO][4794] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" HandleID="k8s-pod-network.76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Workload="localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040e140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79646b996b-6z92f", "timestamp":"2025-09-09 00:02:50.316291174 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.316 [INFO][4794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.317 [INFO][4794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.317 [INFO][4794] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.328 [INFO][4794] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" host="localhost" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.337 [INFO][4794] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.345 [INFO][4794] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.347 [INFO][4794] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.349 [INFO][4794] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.349 [INFO][4794] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" host="localhost" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.351 [INFO][4794] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1 Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.360 [INFO][4794] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" host="localhost" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.516 [INFO][4794] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" host="localhost" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.516 [INFO][4794] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" host="localhost" Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.516 [INFO][4794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
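
The pod_startup_latency_tracker entry above for calico-node-m2crl is internally consistent: the end-to-end duration is the gap between pod creation (00:02:26) and the watch-observed running time (00:02:49.032…), and the SLO duration is what remains after subtracting the image-pull window (firstStartedPulling to lastFinishedPulling). The sketch below reproduces both figures from the logged clock times; the parsing layout and the subtraction are assumptions about how the tracker arrives at its numbers, but they match to the nanosecond here.

    package main

    import (
        "fmt"
        "time"
    )

    // clock parses a time-of-day; only same-day differences matter here.
    func clock(s string) time.Time {
        t, err := time.Parse("15:04:05.999999999", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := clock("00:02:26")             // podCreationTimestamp
        firstPull := clock("00:02:27.164193613") // firstStartedPulling
        lastPull := clock("00:02:47.334433429")  // lastFinishedPulling
        running := clock("00:02:49.032409254")   // watchObservedRunningTime

        e2e := running.Sub(created)        // podStartE2EDuration
        pulling := lastPull.Sub(firstPull) // time spent pulling images
        slo := e2e - pulling               // podStartSLOduration, pull time excluded

        fmt.Println("E2E:", e2e) // 23.032409254s
        fmt.Println("SLO:", slo) // 2.862169438s
    }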
Sep 9 00:02:50.550691 containerd[1510]: 2025-09-09 00:02:50.516 [INFO][4794] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" HandleID="k8s-pod-network.76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Workload="localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0" Sep 9 00:02:50.551743 containerd[1510]: 2025-09-09 00:02:50.519 [INFO][4701] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-6z92f" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0", GenerateName:"calico-apiserver-79646b996b-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b12fb94-9dbc-4031-90bc-b634460d49b8", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79646b996b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79646b996b-6z92f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc37378c79f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:50.551743 containerd[1510]: 2025-09-09 00:02:50.520 [INFO][4701] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-6z92f" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0" Sep 9 00:02:50.551743 containerd[1510]: 2025-09-09 00:02:50.520 [INFO][4701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc37378c79f ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-6z92f" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0" Sep 9 00:02:50.551743 containerd[1510]: 2025-09-09 00:02:50.529 [INFO][4701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-6z92f" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0" Sep 9 00:02:50.551743 containerd[1510]: 2025-09-09 00:02:50.529 [INFO][4701] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-6z92f" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0", GenerateName:"calico-apiserver-79646b996b-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b12fb94-9dbc-4031-90bc-b634460d49b8", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79646b996b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1", Pod:"calico-apiserver-79646b996b-6z92f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc37378c79f", MAC:"3e:4f:b2:a5:78:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:50.551743 containerd[1510]: 2025-09-09 00:02:50.547 [INFO][4701] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-6z92f" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--6z92f-eth0" Sep 9 00:02:50.583324 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:36342.service - OpenSSH per-connection server daemon (10.0.0.1:36342). 
Sep 9 00:02:50.587059 systemd-networkd[1423]: cali7ca851bfd66: Link UP Sep 9 00:02:50.587305 systemd-networkd[1423]: cali7ca851bfd66: Gained carrier Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.200 [INFO][4735] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.223 [INFO][4735] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0 calico-kube-controllers-68c5d8b85b- calico-system 57b1360c-3eda-441d-822b-cfab485ba025 820 0 2025-09-09 00:02:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68c5d8b85b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-68c5d8b85b-fq8gn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7ca851bfd66 [] [] }} ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Namespace="calico-system" Pod="calico-kube-controllers-68c5d8b85b-fq8gn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.223 [INFO][4735] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Namespace="calico-system" Pod="calico-kube-controllers-68c5d8b85b-fq8gn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.315 [INFO][4805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" HandleID="k8s-pod-network.e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Workload="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.317 [INFO][4805] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" HandleID="k8s-pod-network.e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Workload="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000143180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-68c5d8b85b-fq8gn", "timestamp":"2025-09-09 00:02:50.315697749 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.317 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.516 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
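Each CNI ADD is translated into an IPAM auto-assign request, and the journal prints the ipam.AutoAssignArgs struct verbatim. The sketch below mirrors that request shape using only a subset of the logged fields and local types rather than libcalico-go, so it is an illustration of the request, not the library's API.

```go
package main

import "fmt"

// autoAssignArgs mirrors the ipam.AutoAssignArgs fields printed verbatim in the
// journal (Num4, Num6, HandleID, Attrs, Hostname, IntendedUse, ...). Local
// illustration only, not libcalico-go's type.
type autoAssignArgs struct {
	Num4        int
	Num6        int
	HandleID    *string
	Attrs       map[string]string
	Hostname    string
	IntendedUse string
}

func main() {
	handle := "k8s-pod-network.e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6"
	args := autoAssignArgs{
		Num4:     1, // one IPv4 address requested, no IPv6 (Num6 stays 0)
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "localhost",
			"pod":       "calico-kube-controllers-68c5d8b85b-fq8gn",
		},
		Hostname:    "localhost",
		IntendedUse: "Workload",
	}
	fmt.Printf("request %d IPv4 for %s (handle %s)\n", args.Num4, args.Attrs["pod"], *args.HandleID)
}
```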
Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.516 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.525 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" host="localhost" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.531 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.536 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.545 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.550 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.550 [INFO][4805] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" host="localhost" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.554 [INFO][4805] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6 Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.562 [INFO][4805] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" host="localhost" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.574 [INFO][4805] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" host="localhost" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.574 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" host="localhost" Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.574 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
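The timestamps make the host-wide IPAM lock visible: handler [4805] asked for the lock at 00:02:50.317 and only acquired it at 00:02:50.516, the moment [4794] released it, so allocations on a node proceed strictly one at a time. A toy illustration of that serialization with a single mutex follows; which pod ends up with which address depends on who wins the lock, here just as in the log.

```go
package main

import (
	"fmt"
	"sync"
)

// allocator illustrates why the CNI ADD handlers above run one at a time: a
// single host-wide lock guards reads and writes of the IPAM block, so a second
// allocation can only start once the first one has released the lock.
type allocator struct {
	mu      sync.Mutex
	nextIdx int
}

func (a *allocator) assign(pod string) string {
	a.mu.Lock()           // "About to acquire host-wide IPAM lock." / "Acquired ..."
	defer a.mu.Unlock()   // "Released host-wide IPAM lock."
	a.nextIdx++
	return fmt.Sprintf("192.168.88.%d/26 -> %s", 128+a.nextIdx, pod)
}

func main() {
	a := &allocator{}
	var wg sync.WaitGroup
	for _, pod := range []string{
		"calico-apiserver-79646b996b-6z92f",
		"calico-kube-controllers-68c5d8b85b-fq8gn",
		"calico-apiserver-79646b996b-cw46r",
	} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); fmt.Println(a.assign(p)) }(pod)
	}
	wg.Wait()
}
```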
Sep 9 00:02:50.605599 containerd[1510]: 2025-09-09 00:02:50.574 [INFO][4805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" HandleID="k8s-pod-network.e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Workload="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0" Sep 9 00:02:50.606251 containerd[1510]: 2025-09-09 00:02:50.582 [INFO][4735] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Namespace="calico-system" Pod="calico-kube-controllers-68c5d8b85b-fq8gn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0", GenerateName:"calico-kube-controllers-68c5d8b85b-", Namespace:"calico-system", SelfLink:"", UID:"57b1360c-3eda-441d-822b-cfab485ba025", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c5d8b85b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-68c5d8b85b-fq8gn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ca851bfd66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:50.606251 containerd[1510]: 2025-09-09 00:02:50.582 [INFO][4735] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Namespace="calico-system" Pod="calico-kube-controllers-68c5d8b85b-fq8gn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0" Sep 9 00:02:50.606251 containerd[1510]: 2025-09-09 00:02:50.582 [INFO][4735] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ca851bfd66 ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Namespace="calico-system" Pod="calico-kube-controllers-68c5d8b85b-fq8gn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0" Sep 9 00:02:50.606251 containerd[1510]: 2025-09-09 00:02:50.587 [INFO][4735] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Namespace="calico-system" Pod="calico-kube-controllers-68c5d8b85b-fq8gn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0" Sep 9 00:02:50.606251 containerd[1510]: 2025-09-09 00:02:50.587 [INFO][4735] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Namespace="calico-system" Pod="calico-kube-controllers-68c5d8b85b-fq8gn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0", GenerateName:"calico-kube-controllers-68c5d8b85b-", Namespace:"calico-system", SelfLink:"", UID:"57b1360c-3eda-441d-822b-cfab485ba025", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c5d8b85b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6", Pod:"calico-kube-controllers-68c5d8b85b-fq8gn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ca851bfd66", MAC:"7a:a8:ac:ee:d3:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:50.606251 containerd[1510]: 2025-09-09 00:02:50.599 [INFO][4735] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6" Namespace="calico-system" Pod="calico-kube-controllers-68c5d8b85b-fq8gn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68c5d8b85b--fq8gn-eth0" Sep 9 00:02:50.612697 containerd[1510]: time="2025-09-09T00:02:50.612076896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:50.612697 containerd[1510]: time="2025-09-09T00:02:50.612162307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:50.612697 containerd[1510]: time="2025-09-09T00:02:50.612181684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:50.612697 containerd[1510]: time="2025-09-09T00:02:50.612290237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:50.650289 systemd[1]: Started cri-containerd-76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1.scope - libcontainer container 76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1. Sep 9 00:02:50.653819 containerd[1510]: time="2025-09-09T00:02:50.653240198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:50.654165 containerd[1510]: time="2025-09-09T00:02:50.653929202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:50.654165 containerd[1510]: time="2025-09-09T00:02:50.653959068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:50.654165 containerd[1510]: time="2025-09-09T00:02:50.654103299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:50.655234 sshd[4867]: Accepted publickey for core from 10.0.0.1 port 36342 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:02:50.657364 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:50.666574 systemd-logind[1493]: New session 8 of user core. Sep 9 00:02:50.674219 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:02:50.680938 systemd-networkd[1423]: calic2669fdaba7: Link UP Sep 9 00:02:50.681579 systemd-networkd[1423]: calic2669fdaba7: Gained carrier Sep 9 00:02:50.686262 systemd[1]: Started cri-containerd-e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6.scope - libcontainer container e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6. Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.197 [INFO][4710] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.230 [INFO][4710] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0 calico-apiserver-79646b996b- calico-apiserver abf21d55-a6e7-4fd1-ad4c-e82f7525f680 811 0 2025-09-09 00:02:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79646b996b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79646b996b-cw46r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2669fdaba7 [] [] }} ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-cw46r" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--cw46r-" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.231 [INFO][4710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-cw46r" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.315 [INFO][4814] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" HandleID="k8s-pod-network.a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Workload="localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.320 [INFO][4814] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" 
HandleID="k8s-pod-network.a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Workload="localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79646b996b-cw46r", "timestamp":"2025-09-09 00:02:50.315641624 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.320 [INFO][4814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.574 [INFO][4814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.574 [INFO][4814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.627 [INFO][4814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" host="localhost" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.634 [INFO][4814] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.643 [INFO][4814] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.645 [INFO][4814] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.647 [INFO][4814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.647 [INFO][4814] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" host="localhost" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.651 [INFO][4814] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7 Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.657 [INFO][4814] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" host="localhost" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.670 [INFO][4814] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" host="localhost" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.670 [INFO][4814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" host="localhost" Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.670 [INFO][4814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:02:50.703219 containerd[1510]: 2025-09-09 00:02:50.670 [INFO][4814] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" HandleID="k8s-pod-network.a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Workload="localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0" Sep 9 00:02:50.697625 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:02:50.704113 containerd[1510]: 2025-09-09 00:02:50.677 [INFO][4710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-cw46r" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0", GenerateName:"calico-apiserver-79646b996b-", Namespace:"calico-apiserver", SelfLink:"", UID:"abf21d55-a6e7-4fd1-ad4c-e82f7525f680", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79646b996b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79646b996b-cw46r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2669fdaba7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:50.704113 containerd[1510]: 2025-09-09 00:02:50.677 [INFO][4710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-cw46r" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0" Sep 9 00:02:50.704113 containerd[1510]: 2025-09-09 00:02:50.677 [INFO][4710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2669fdaba7 ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-cw46r" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0" Sep 9 00:02:50.704113 containerd[1510]: 2025-09-09 00:02:50.681 [INFO][4710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-cw46r" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0" Sep 9 00:02:50.704113 containerd[1510]: 2025-09-09 
00:02:50.681 [INFO][4710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-cw46r" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0", GenerateName:"calico-apiserver-79646b996b-", Namespace:"calico-apiserver", SelfLink:"", UID:"abf21d55-a6e7-4fd1-ad4c-e82f7525f680", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79646b996b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7", Pod:"calico-apiserver-79646b996b-cw46r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2669fdaba7", MAC:"be:d0:77:72:05:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:50.704113 containerd[1510]: 2025-09-09 00:02:50.696 [INFO][4710] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7" Namespace="calico-apiserver" Pod="calico-apiserver-79646b996b-cw46r" WorkloadEndpoint="localhost-k8s-calico--apiserver--79646b996b--cw46r-eth0" Sep 9 00:02:50.709273 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:02:50.723356 containerd[1510]: time="2025-09-09T00:02:50.722762062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:50.723356 containerd[1510]: time="2025-09-09T00:02:50.722824960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:50.723356 containerd[1510]: time="2025-09-09T00:02:50.722835049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:50.723356 containerd[1510]: time="2025-09-09T00:02:50.722966836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:50.732480 containerd[1510]: time="2025-09-09T00:02:50.732350705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-6z92f,Uid:2b12fb94-9dbc-4031-90bc-b634460d49b8,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1\"" Sep 9 00:02:50.735463 containerd[1510]: time="2025-09-09T00:02:50.735382637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:02:50.744940 systemd[1]: Started cri-containerd-a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7.scope - libcontainer container a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7. Sep 9 00:02:50.769436 containerd[1510]: time="2025-09-09T00:02:50.768781331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c5d8b85b-fq8gn,Uid:57b1360c-3eda-441d-822b-cfab485ba025,Namespace:calico-system,Attempt:4,} returns sandbox id \"e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6\"" Sep 9 00:02:50.781892 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:02:50.794756 systemd-networkd[1423]: cali8840b34e2c8: Link UP Sep 9 00:02:50.795838 systemd-networkd[1423]: cali8840b34e2c8: Gained carrier Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.144 [INFO][4658] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.156 [INFO][4658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0 coredns-668d6bf9bc- kube-system 8b3fc743-df1d-4d9a-822b-01f3200e3e51 821 0 2025-09-09 00:02:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rm7mk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8840b34e2c8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Namespace="kube-system" Pod="coredns-668d6bf9bc-rm7mk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rm7mk-" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.156 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Namespace="kube-system" Pod="coredns-668d6bf9bc-rm7mk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.320 [INFO][4769] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" HandleID="k8s-pod-network.5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Workload="localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.320 [INFO][4769] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" HandleID="k8s-pod-network.5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Workload="localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0001ad950), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-rm7mk", "timestamp":"2025-09-09 00:02:50.319995808 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.320 [INFO][4769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.672 [INFO][4769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.672 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.727 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" host="localhost" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.737 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.744 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.746 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.750 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.750 [INFO][4769] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" host="localhost" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.752 [INFO][4769] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265 Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.760 [INFO][4769] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" host="localhost" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.770 [INFO][4769] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" host="localhost" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.771 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" host="localhost" Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.771 [INFO][4769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
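systemd-networkd reports each new host-side cali* veth with "Link UP" and "Gained carrier". A quick way to confirm such an interface from Go is the standard library's net.InterfaceByName; the names below are the ones from this journal and will only resolve on this host.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Host-side veth names reported by systemd-networkd in the journal above.
	for _, name := range []string{"calidc37378c79f", "cali7ca851bfd66", "cali8840b34e2c8"} {
		ifc, err := net.InterfaceByName(name)
		if err != nil {
			fmt.Printf("%s: %v\n", name, err) // not present on this host
			continue
		}
		fmt.Printf("%s: index=%d mtu=%d up=%v\n",
			name, ifc.Index, ifc.MTU, ifc.Flags&net.FlagUp != 0)
	}
}
```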
Sep 9 00:02:50.819573 containerd[1510]: 2025-09-09 00:02:50.771 [INFO][4769] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" HandleID="k8s-pod-network.5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Workload="localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0" Sep 9 00:02:50.820204 containerd[1510]: 2025-09-09 00:02:50.786 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Namespace="kube-system" Pod="coredns-668d6bf9bc-rm7mk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8b3fc743-df1d-4d9a-822b-01f3200e3e51", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-rm7mk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8840b34e2c8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:50.820204 containerd[1510]: 2025-09-09 00:02:50.787 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Namespace="kube-system" Pod="coredns-668d6bf9bc-rm7mk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0" Sep 9 00:02:50.820204 containerd[1510]: 2025-09-09 00:02:50.787 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8840b34e2c8 ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Namespace="kube-system" Pod="coredns-668d6bf9bc-rm7mk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0" Sep 9 00:02:50.820204 containerd[1510]: 2025-09-09 00:02:50.796 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Namespace="kube-system" Pod="coredns-668d6bf9bc-rm7mk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0" Sep 9 00:02:50.820204 
containerd[1510]: 2025-09-09 00:02:50.797 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Namespace="kube-system" Pod="coredns-668d6bf9bc-rm7mk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8b3fc743-df1d-4d9a-822b-01f3200e3e51", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265", Pod:"coredns-668d6bf9bc-rm7mk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8840b34e2c8", MAC:"9a:24:f9:cb:de:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:50.820204 containerd[1510]: 2025-09-09 00:02:50.811 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265" Namespace="kube-system" Pod="coredns-668d6bf9bc-rm7mk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rm7mk-eth0" Sep 9 00:02:50.826465 containerd[1510]: time="2025-09-09T00:02:50.825868630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79646b996b-cw46r,Uid:abf21d55-a6e7-4fd1-ad4c-e82f7525f680,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7\"" Sep 9 00:02:50.856323 containerd[1510]: time="2025-09-09T00:02:50.856192081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:50.856323 containerd[1510]: time="2025-09-09T00:02:50.856259517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:50.856323 containerd[1510]: time="2025-09-09T00:02:50.856275126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:50.856652 containerd[1510]: time="2025-09-09T00:02:50.856370967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:50.866388 sshd[4938]: Connection closed by 10.0.0.1 port 36342 Sep 9 00:02:50.867576 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Sep 9 00:02:50.885553 systemd-networkd[1423]: calib0f4fc0d6e6: Link UP Sep 9 00:02:50.885845 systemd-networkd[1423]: calib0f4fc0d6e6: Gained carrier Sep 9 00:02:50.901725 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:36342.service: Deactivated successfully. Sep 9 00:02:50.904368 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:02:50.905160 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:02:50.921443 systemd[1]: Started cri-containerd-5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265.scope - libcontainer container 5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265. Sep 9 00:02:50.922491 systemd-logind[1493]: Removed session 8. Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.115 [INFO][4684] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.147 [INFO][4684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--b854f49bb--nlfqw-eth0 whisker-b854f49bb- calico-system fe12ff7b-73e6-42d0-a348-29ad8070fac9 974 0 2025-09-09 00:02:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b854f49bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-b854f49bb-nlfqw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib0f4fc0d6e6 [] [] }} ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Namespace="calico-system" Pod="whisker-b854f49bb-nlfqw" WorkloadEndpoint="localhost-k8s-whisker--b854f49bb--nlfqw-" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.147 [INFO][4684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Namespace="calico-system" Pod="whisker-b854f49bb-nlfqw" WorkloadEndpoint="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.322 [INFO][4767] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" HandleID="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Workload="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.322 [INFO][4767] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" HandleID="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Workload="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a5760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-b854f49bb-nlfqw", "timestamp":"2025-09-09 00:02:50.322176142 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.322 [INFO][4767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.771 [INFO][4767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.771 [INFO][4767] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.830 [INFO][4767] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" host="localhost" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.840 [INFO][4767] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.847 [INFO][4767] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.850 [INFO][4767] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.853 [INFO][4767] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.853 [INFO][4767] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" host="localhost" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.856 [INFO][4767] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660 Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.860 [INFO][4767] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" host="localhost" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.872 [INFO][4767] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" host="localhost" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.872 [INFO][4767] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" host="localhost" Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.874 [INFO][4767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
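The coredns endpoint dump above lists its ports in hex: Port:0x35 for dns and dns-tcp and Port:0x23c1 for metrics, i.e. 53 and 9153. A small mirror of those v3.WorkloadEndpointPort entries, using local types and the values from the log:

```go
package main

import "fmt"

// endpointPort mirrors the v3.WorkloadEndpointPort entries printed for the
// coredns endpoint above; the journal shows the port numbers in hex.
type endpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

func main() {
	ports := []endpointPort{
		{Name: "dns", Protocol: "UDP", Port: 0x35},       // 53
		{Name: "dns-tcp", Protocol: "TCP", Port: 0x35},   // 53
		{Name: "metrics", Protocol: "TCP", Port: 0x23c1}, // 9153
	}
	for _, p := range ports {
		fmt.Printf("%-8s %-3s %d\n", p.Name, p.Protocol, p.Port)
	}
}
```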
Sep 9 00:02:50.926150 containerd[1510]: 2025-09-09 00:02:50.874 [INFO][4767] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" HandleID="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Workload="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" Sep 9 00:02:50.926733 containerd[1510]: 2025-09-09 00:02:50.877 [INFO][4684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Namespace="calico-system" Pod="whisker-b854f49bb-nlfqw" WorkloadEndpoint="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b854f49bb--nlfqw-eth0", GenerateName:"whisker-b854f49bb-", Namespace:"calico-system", SelfLink:"", UID:"fe12ff7b-73e6-42d0-a348-29ad8070fac9", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b854f49bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-b854f49bb-nlfqw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0f4fc0d6e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:50.926733 containerd[1510]: 2025-09-09 00:02:50.877 [INFO][4684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Namespace="calico-system" Pod="whisker-b854f49bb-nlfqw" WorkloadEndpoint="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" Sep 9 00:02:50.926733 containerd[1510]: 2025-09-09 00:02:50.877 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0f4fc0d6e6 ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Namespace="calico-system" Pod="whisker-b854f49bb-nlfqw" WorkloadEndpoint="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" Sep 9 00:02:50.926733 containerd[1510]: 2025-09-09 00:02:50.882 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Namespace="calico-system" Pod="whisker-b854f49bb-nlfqw" WorkloadEndpoint="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" Sep 9 00:02:50.926733 containerd[1510]: 2025-09-09 00:02:50.882 [INFO][4684] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Namespace="calico-system" Pod="whisker-b854f49bb-nlfqw" WorkloadEndpoint="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b854f49bb--nlfqw-eth0", GenerateName:"whisker-b854f49bb-", Namespace:"calico-system", SelfLink:"", UID:"fe12ff7b-73e6-42d0-a348-29ad8070fac9", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b854f49bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660", Pod:"whisker-b854f49bb-nlfqw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0f4fc0d6e6", MAC:"0a:de:a9:d0:de:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:50.926733 containerd[1510]: 2025-09-09 00:02:50.922 [INFO][4684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Namespace="calico-system" Pod="whisker-b854f49bb-nlfqw" WorkloadEndpoint="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" Sep 9 00:02:50.938372 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:02:50.967728 containerd[1510]: time="2025-09-09T00:02:50.967689902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rm7mk,Uid:8b3fc743-df1d-4d9a-822b-01f3200e3e51,Namespace:kube-system,Attempt:4,} returns sandbox id \"5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265\"" Sep 9 00:02:50.968540 kubelet[2630]: E0909 00:02:50.968520 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:50.970717 containerd[1510]: time="2025-09-09T00:02:50.970681419Z" level=info msg="CreateContainer within sandbox \"5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:02:50.980930 containerd[1510]: time="2025-09-09T00:02:50.980766583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:50.985741 containerd[1510]: time="2025-09-09T00:02:50.980882380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:50.985741 containerd[1510]: time="2025-09-09T00:02:50.984987287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:50.985741 containerd[1510]: time="2025-09-09T00:02:50.985154552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:51.006187 systemd[1]: Started cri-containerd-fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660.scope - libcontainer container fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660. Sep 9 00:02:51.023962 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:02:51.024746 systemd-networkd[1423]: calid18df78b173: Link UP Sep 9 00:02:51.026375 systemd-networkd[1423]: calid18df78b173: Gained carrier Sep 9 00:02:51.026449 containerd[1510]: time="2025-09-09T00:02:51.026396629Z" level=info msg="CreateContainer within sandbox \"5bd42b36113c5cf44983c40e56c1f4288e242abb81f0b024c760f9642cda7265\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19654bde64e4c151e156b289a9a023d08b87e977d6c48dc0f08a45f3f6900dbf\"" Sep 9 00:02:51.029106 containerd[1510]: time="2025-09-09T00:02:51.028959260Z" level=info msg="StartContainer for \"19654bde64e4c151e156b289a9a023d08b87e977d6c48dc0f08a45f3f6900dbf\"" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.184 [INFO][4661] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.206 [INFO][4661] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nbs8t-eth0 csi-node-driver- calico-system 13bd77bc-168d-4e24-bcab-4df0554bc784 706 0 2025-09-09 00:02:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nbs8t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid18df78b173 [] [] }} ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Namespace="calico-system" Pod="csi-node-driver-nbs8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--nbs8t-" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.206 [INFO][4661] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Namespace="calico-system" Pod="csi-node-driver-nbs8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--nbs8t-eth0" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.323 [INFO][4792] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" HandleID="k8s-pod-network.d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Workload="localhost-k8s-csi--node--driver--nbs8t-eth0" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.323 [INFO][4792] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" HandleID="k8s-pod-network.d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Workload="localhost-k8s-csi--node--driver--nbs8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004460f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nbs8t", "timestamp":"2025-09-09 00:02:50.323247734 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.323 [INFO][4792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.872 [INFO][4792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.872 [INFO][4792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.959 [INFO][4792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" host="localhost" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.968 [INFO][4792] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.978 [INFO][4792] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.980 [INFO][4792] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.983 [INFO][4792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.983 [INFO][4792] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" host="localhost" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.985 [INFO][4792] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:50.994 [INFO][4792] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" host="localhost" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:51.009 [INFO][4792] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" host="localhost" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:51.009 [INFO][4792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" host="localhost" Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:51.016 [INFO][4792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:02:51.054187 containerd[1510]: 2025-09-09 00:02:51.016 [INFO][4792] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" HandleID="k8s-pod-network.d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Workload="localhost-k8s-csi--node--driver--nbs8t-eth0" Sep 9 00:02:51.054993 containerd[1510]: 2025-09-09 00:02:51.020 [INFO][4661] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Namespace="calico-system" Pod="csi-node-driver-nbs8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--nbs8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nbs8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13bd77bc-168d-4e24-bcab-4df0554bc784", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nbs8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid18df78b173", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:51.054993 containerd[1510]: 2025-09-09 00:02:51.022 [INFO][4661] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Namespace="calico-system" Pod="csi-node-driver-nbs8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--nbs8t-eth0" Sep 9 00:02:51.054993 containerd[1510]: 2025-09-09 00:02:51.022 [INFO][4661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid18df78b173 ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Namespace="calico-system" Pod="csi-node-driver-nbs8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--nbs8t-eth0" Sep 9 00:02:51.054993 containerd[1510]: 2025-09-09 00:02:51.025 [INFO][4661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Namespace="calico-system" Pod="csi-node-driver-nbs8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--nbs8t-eth0" Sep 9 00:02:51.054993 containerd[1510]: 2025-09-09 00:02:51.026 [INFO][4661] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Namespace="calico-system" Pod="csi-node-driver-nbs8t" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nbs8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nbs8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13bd77bc-168d-4e24-bcab-4df0554bc784", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c", Pod:"csi-node-driver-nbs8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid18df78b173", MAC:"9e:5c:83:3a:df:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:51.054993 containerd[1510]: 2025-09-09 00:02:51.047 [INFO][4661] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c" Namespace="calico-system" Pod="csi-node-driver-nbs8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--nbs8t-eth0" Sep 9 00:02:51.072451 containerd[1510]: time="2025-09-09T00:02:51.072402578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b854f49bb-nlfqw,Uid:fe12ff7b-73e6-42d0-a348-29ad8070fac9,Namespace:calico-system,Attempt:4,} returns sandbox id \"fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660\"" Sep 9 00:02:51.079569 systemd[1]: Started cri-containerd-19654bde64e4c151e156b289a9a023d08b87e977d6c48dc0f08a45f3f6900dbf.scope - libcontainer container 19654bde64e4c151e156b289a9a023d08b87e977d6c48dc0f08a45f3f6900dbf. Sep 9 00:02:51.091065 containerd[1510]: time="2025-09-09T00:02:51.090858746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:51.091065 containerd[1510]: time="2025-09-09T00:02:51.090983621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:51.091065 containerd[1510]: time="2025-09-09T00:02:51.091004981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:51.091323 containerd[1510]: time="2025-09-09T00:02:51.091201489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:51.119812 systemd-networkd[1423]: cali0587918f5ae: Link UP Sep 9 00:02:51.120324 systemd[1]: Started cri-containerd-d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c.scope - libcontainer container d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c. Sep 9 00:02:51.121110 systemd-networkd[1423]: cali0587918f5ae: Gained carrier Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:50.198 [INFO][4705] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:50.236 [INFO][4705] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0 coredns-668d6bf9bc- kube-system 6de14301-f214-422d-9e12-0b69107cbf97 815 0 2025-09-09 00:02:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4p4qb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0587918f5ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4p4qb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4p4qb-" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:50.236 [INFO][4705] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4p4qb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:50.328 [INFO][4812] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" HandleID="k8s-pod-network.9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Workload="localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:50.328 [INFO][4812] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" HandleID="k8s-pod-network.9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Workload="localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135730), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4p4qb", "timestamp":"2025-09-09 00:02:50.328459699 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:50.328 [INFO][4812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.012 [INFO][4812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.012 [INFO][4812] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.029 [INFO][4812] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" host="localhost" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.069 [INFO][4812] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.077 [INFO][4812] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.081 [INFO][4812] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.085 [INFO][4812] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.085 [INFO][4812] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" host="localhost" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.087 [INFO][4812] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.092 [INFO][4812] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" host="localhost" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.102 [INFO][4812] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" host="localhost" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.105 [INFO][4812] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" host="localhost" Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.105 [INFO][4812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:02:51.139878 containerd[1510]: 2025-09-09 00:02:51.105 [INFO][4812] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" HandleID="k8s-pod-network.9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Workload="localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0" Sep 9 00:02:51.140659 containerd[1510]: 2025-09-09 00:02:51.116 [INFO][4705] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4p4qb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6de14301-f214-422d-9e12-0b69107cbf97", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4p4qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0587918f5ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:51.140659 containerd[1510]: 2025-09-09 00:02:51.116 [INFO][4705] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4p4qb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0" Sep 9 00:02:51.140659 containerd[1510]: 2025-09-09 00:02:51.116 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0587918f5ae ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4p4qb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0" Sep 9 00:02:51.140659 containerd[1510]: 2025-09-09 00:02:51.120 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4p4qb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0" Sep 9 00:02:51.140659 
containerd[1510]: 2025-09-09 00:02:51.120 [INFO][4705] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4p4qb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6de14301-f214-422d-9e12-0b69107cbf97", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a", Pod:"coredns-668d6bf9bc-4p4qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0587918f5ae", MAC:"66:ff:ff:1f:ed:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:51.140659 containerd[1510]: 2025-09-09 00:02:51.133 [INFO][4705] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4p4qb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4p4qb-eth0" Sep 9 00:02:51.145762 containerd[1510]: time="2025-09-09T00:02:51.145428262Z" level=info msg="StartContainer for \"19654bde64e4c151e156b289a9a023d08b87e977d6c48dc0f08a45f3f6900dbf\" returns successfully" Sep 9 00:02:51.151789 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:02:51.168835 containerd[1510]: time="2025-09-09T00:02:51.168763771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbs8t,Uid:13bd77bc-168d-4e24-bcab-4df0554bc784,Namespace:calico-system,Attempt:4,} returns sandbox id \"d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c\"" Sep 9 00:02:51.172455 containerd[1510]: time="2025-09-09T00:02:51.172345265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:51.172607 containerd[1510]: time="2025-09-09T00:02:51.172472333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:51.172607 containerd[1510]: time="2025-09-09T00:02:51.172521014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:51.172801 containerd[1510]: time="2025-09-09T00:02:51.172700010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:51.195286 systemd[1]: Started cri-containerd-9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a.scope - libcontainer container 9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a. Sep 9 00:02:51.209216 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:02:51.236591 containerd[1510]: time="2025-09-09T00:02:51.236536945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4p4qb,Uid:6de14301-f214-422d-9e12-0b69107cbf97,Namespace:kube-system,Attempt:5,} returns sandbox id \"9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a\"" Sep 9 00:02:51.238112 kubelet[2630]: E0909 00:02:51.237478 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:51.239974 containerd[1510]: time="2025-09-09T00:02:51.239935775Z" level=info msg="CreateContainer within sandbox \"9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:02:51.562093 systemd-networkd[1423]: caliab58ce52d2c: Link UP Sep 9 00:02:51.562429 systemd-networkd[1423]: caliab58ce52d2c: Gained carrier Sep 9 00:02:51.568327 systemd-networkd[1423]: calidc37378c79f: Gained IPv6LL Sep 9 00:02:51.573383 containerd[1510]: time="2025-09-09T00:02:51.573320723Z" level=info msg="CreateContainer within sandbox \"9732689d02c9bf2dd494dd2e1bd26c40a657228a077e63685a4692d78004127a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aaa268098f932c85715b3013fc95cd6dbd0c07f26e03cfe16cc2607fae0eb532\"" Sep 9 00:02:51.574861 containerd[1510]: time="2025-09-09T00:02:51.574107289Z" level=info msg="StartContainer for \"aaa268098f932c85715b3013fc95cd6dbd0c07f26e03cfe16cc2607fae0eb532\"" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:50.196 [INFO][4722] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:50.237 [INFO][4722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--ct9bv-eth0 goldmane-54d579b49d- calico-system 9d38a1be-6323-41e6-8564-b477a0eb94a8 824 0 2025-09-09 00:02:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-ct9bv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliab58ce52d2c [] [] }} ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Namespace="calico-system" Pod="goldmane-54d579b49d-ct9bv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ct9bv-" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:50.237 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Namespace="calico-system" Pod="goldmane-54d579b49d-ct9bv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ct9bv-eth0" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:50.331 [INFO][4821] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" HandleID="k8s-pod-network.2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Workload="localhost-k8s-goldmane--54d579b49d--ct9bv-eth0" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:50.335 [INFO][4821] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" HandleID="k8s-pod-network.2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Workload="localhost-k8s-goldmane--54d579b49d--ct9bv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-ct9bv", "timestamp":"2025-09-09 00:02:50.331445204 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:50.335 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.105 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.106 [INFO][4821] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.132 [INFO][4821] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" host="localhost" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.170 [INFO][4821] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.177 [INFO][4821] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.232 [INFO][4821] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.235 [INFO][4821] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.235 [INFO][4821] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" host="localhost" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.236 [INFO][4821] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.502 [INFO][4821] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" host="localhost" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.544 [INFO][4821] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" host="localhost" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.545 [INFO][4821] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" host="localhost" Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.545 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:02:51.603133 containerd[1510]: 2025-09-09 00:02:51.545 [INFO][4821] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" HandleID="k8s-pod-network.2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Workload="localhost-k8s-goldmane--54d579b49d--ct9bv-eth0" Sep 9 00:02:51.604428 containerd[1510]: 2025-09-09 00:02:51.553 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Namespace="calico-system" Pod="goldmane-54d579b49d-ct9bv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ct9bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--ct9bv-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"9d38a1be-6323-41e6-8564-b477a0eb94a8", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-ct9bv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliab58ce52d2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:51.604428 containerd[1510]: 2025-09-09 00:02:51.553 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Namespace="calico-system" Pod="goldmane-54d579b49d-ct9bv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ct9bv-eth0" Sep 9 00:02:51.604428 containerd[1510]: 2025-09-09 00:02:51.553 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab58ce52d2c ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Namespace="calico-system" Pod="goldmane-54d579b49d-ct9bv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ct9bv-eth0" Sep 9 00:02:51.604428 containerd[1510]: 2025-09-09 00:02:51.563 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Namespace="calico-system" Pod="goldmane-54d579b49d-ct9bv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ct9bv-eth0" Sep 9 00:02:51.604428 containerd[1510]: 2025-09-09 00:02:51.566 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Namespace="calico-system" Pod="goldmane-54d579b49d-ct9bv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ct9bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--ct9bv-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"9d38a1be-6323-41e6-8564-b477a0eb94a8", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 2, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a", Pod:"goldmane-54d579b49d-ct9bv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliab58ce52d2c", MAC:"66:fa:d3:a3:70:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:02:51.604428 containerd[1510]: 2025-09-09 00:02:51.593 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a" Namespace="calico-system" Pod="goldmane-54d579b49d-ct9bv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ct9bv-eth0" Sep 9 00:02:51.631463 systemd[1]: Started cri-containerd-aaa268098f932c85715b3013fc95cd6dbd0c07f26e03cfe16cc2607fae0eb532.scope - libcontainer container aaa268098f932c85715b3013fc95cd6dbd0c07f26e03cfe16cc2607fae0eb532. Sep 9 00:02:51.631535 systemd-networkd[1423]: cali7ca851bfd66: Gained IPv6LL Sep 9 00:02:51.658996 containerd[1510]: time="2025-09-09T00:02:51.658543685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:02:51.658996 containerd[1510]: time="2025-09-09T00:02:51.658624437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:02:51.658996 containerd[1510]: time="2025-09-09T00:02:51.658643653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:51.658996 containerd[1510]: time="2025-09-09T00:02:51.658761835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:02:51.681114 kernel: bpftool[5441]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 9 00:02:51.685226 systemd[1]: Started cri-containerd-2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a.scope - libcontainer container 2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a. Sep 9 00:02:51.695513 containerd[1510]: time="2025-09-09T00:02:51.694573175Z" level=info msg="StartContainer for \"aaa268098f932c85715b3013fc95cd6dbd0c07f26e03cfe16cc2607fae0eb532\" returns successfully" Sep 9 00:02:51.721346 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:02:51.761807 containerd[1510]: time="2025-09-09T00:02:51.761672403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ct9bv,Uid:9d38a1be-6323-41e6-8564-b477a0eb94a8,Namespace:calico-system,Attempt:5,} returns sandbox id \"2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a\"" Sep 9 00:02:51.880776 kubelet[2630]: E0909 00:02:51.880623 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:51.883890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386519033.mount: Deactivated successfully. Sep 9 00:02:51.893069 kubelet[2630]: E0909 00:02:51.889207 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:51.900959 kubelet[2630]: I0909 00:02:51.900784 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rm7mk" podStartSLOduration=36.900764326 podStartE2EDuration="36.900764326s" podCreationTimestamp="2025-09-09 00:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:02:51.900275007 +0000 UTC m=+42.567845839" watchObservedRunningTime="2025-09-09 00:02:51.900764326 +0000 UTC m=+42.568335147" Sep 9 00:02:51.944048 kubelet[2630]: I0909 00:02:51.943952 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4p4qb" podStartSLOduration=36.943932819 podStartE2EDuration="36.943932819s" podCreationTimestamp="2025-09-09 00:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:02:51.917605554 +0000 UTC m=+42.585176375" watchObservedRunningTime="2025-09-09 00:02:51.943932819 +0000 UTC m=+42.611503640" Sep 9 00:02:52.058110 systemd-networkd[1423]: vxlan.calico: Link UP Sep 9 00:02:52.058121 systemd-networkd[1423]: vxlan.calico: Gained carrier Sep 9 00:02:52.078479 systemd-networkd[1423]: cali8840b34e2c8: Gained IPv6LL Sep 9 00:02:52.078815 systemd-networkd[1423]: calib0f4fc0d6e6: Gained IPv6LL Sep 9 00:02:52.205288 systemd-networkd[1423]: calic2669fdaba7: Gained IPv6LL Sep 9 00:02:52.589225 systemd-networkd[1423]: cali0587918f5ae: Gained IPv6LL Sep 9 00:02:52.845215 systemd-networkd[1423]: calid18df78b173: Gained IPv6LL Sep 9 00:02:52.897201 kubelet[2630]: E0909 00:02:52.897063 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 
00:02:52.897201 kubelet[2630]: E0909 00:02:52.897082 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:52.973776 systemd-networkd[1423]: caliab58ce52d2c: Gained IPv6LL Sep 9 00:02:53.335137 containerd[1510]: time="2025-09-09T00:02:53.335067507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:53.336336 containerd[1510]: time="2025-09-09T00:02:53.336272589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:02:53.337987 containerd[1510]: time="2025-09-09T00:02:53.337927485Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:53.340721 containerd[1510]: time="2025-09-09T00:02:53.340661778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:53.341291 containerd[1510]: time="2025-09-09T00:02:53.341247468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.605819356s" Sep 9 00:02:53.341291 containerd[1510]: time="2025-09-09T00:02:53.341287252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:02:53.342397 containerd[1510]: time="2025-09-09T00:02:53.342371909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:02:53.343547 containerd[1510]: time="2025-09-09T00:02:53.343520484Z" level=info msg="CreateContainer within sandbox \"76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:02:53.366015 containerd[1510]: time="2025-09-09T00:02:53.365959617Z" level=info msg="CreateContainer within sandbox \"76c10075146e6901cd5fc760853af1faa933e3f2e086237dd13f9ce7d8e547b1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"effc91dea7361d603c120cdd0708d7a00694dc92d9e67e6ffaae8e2331316d66\"" Sep 9 00:02:53.366556 containerd[1510]: time="2025-09-09T00:02:53.366519597Z" level=info msg="StartContainer for \"effc91dea7361d603c120cdd0708d7a00694dc92d9e67e6ffaae8e2331316d66\"" Sep 9 00:02:53.398260 systemd[1]: Started cri-containerd-effc91dea7361d603c120cdd0708d7a00694dc92d9e67e6ffaae8e2331316d66.scope - libcontainer container effc91dea7361d603c120cdd0708d7a00694dc92d9e67e6ffaae8e2331316d66. 
Sep 9 00:02:53.454738 containerd[1510]: time="2025-09-09T00:02:53.454679066Z" level=info msg="StartContainer for \"effc91dea7361d603c120cdd0708d7a00694dc92d9e67e6ffaae8e2331316d66\" returns successfully" Sep 9 00:02:53.902392 kubelet[2630]: E0909 00:02:53.902175 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:53.902392 kubelet[2630]: E0909 00:02:53.902207 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:02:53.917361 kubelet[2630]: I0909 00:02:53.917277 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79646b996b-6z92f" podStartSLOduration=28.309888177 podStartE2EDuration="30.917253976s" podCreationTimestamp="2025-09-09 00:02:23 +0000 UTC" firstStartedPulling="2025-09-09 00:02:50.734805133 +0000 UTC m=+41.402375954" lastFinishedPulling="2025-09-09 00:02:53.342170912 +0000 UTC m=+44.009741753" observedRunningTime="2025-09-09 00:02:53.914158676 +0000 UTC m=+44.581729507" watchObservedRunningTime="2025-09-09 00:02:53.917253976 +0000 UTC m=+44.584824797" Sep 9 00:02:54.126801 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL Sep 9 00:02:54.904928 kubelet[2630]: I0909 00:02:54.904868 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:02:55.890707 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:36354.service - OpenSSH per-connection server daemon (10.0.0.1:36354). Sep 9 00:02:55.964606 sshd[5612]: Accepted publickey for core from 10.0.0.1 port 36354 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:02:55.968330 sshd-session[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:02:55.975701 systemd-logind[1493]: New session 9 of user core. Sep 9 00:02:55.984294 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:02:56.138822 sshd[5614]: Connection closed by 10.0.0.1 port 36354 Sep 9 00:02:56.139148 sshd-session[5612]: pam_unix(sshd:session): session closed for user core Sep 9 00:02:56.145162 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:02:56.146244 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:36354.service: Deactivated successfully. Sep 9 00:02:56.148898 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:02:56.150930 systemd-logind[1493]: Removed session 9. 
Sep 9 00:02:56.805563 containerd[1510]: time="2025-09-09T00:02:56.805478548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:56.806594 containerd[1510]: time="2025-09-09T00:02:56.806535020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 00:02:56.807847 containerd[1510]: time="2025-09-09T00:02:56.807814633Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:56.810038 containerd[1510]: time="2025-09-09T00:02:56.809985617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:56.810826 containerd[1510]: time="2025-09-09T00:02:56.810776531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.468372171s" Sep 9 00:02:56.810826 containerd[1510]: time="2025-09-09T00:02:56.810813791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 00:02:56.812133 containerd[1510]: time="2025-09-09T00:02:56.811862350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:02:56.821906 containerd[1510]: time="2025-09-09T00:02:56.821864051Z" level=info msg="CreateContainer within sandbox \"e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:02:56.847016 containerd[1510]: time="2025-09-09T00:02:56.846952109Z" level=info msg="CreateContainer within sandbox \"e49ac02cc21b176d9ccc618c8980ebbf2a5671e6093f40e0b4fcf7e9a6b3d1e6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2ecfe087687b59d5aa14193c76aa35cb16a04fd2caf3162c6d7c466d518e0602\"" Sep 9 00:02:56.847547 containerd[1510]: time="2025-09-09T00:02:56.847518623Z" level=info msg="StartContainer for \"2ecfe087687b59d5aa14193c76aa35cb16a04fd2caf3162c6d7c466d518e0602\"" Sep 9 00:02:56.883230 systemd[1]: Started cri-containerd-2ecfe087687b59d5aa14193c76aa35cb16a04fd2caf3162c6d7c466d518e0602.scope - libcontainer container 2ecfe087687b59d5aa14193c76aa35cb16a04fd2caf3162c6d7c466d518e0602. 
Sep 9 00:02:57.273121 containerd[1510]: time="2025-09-09T00:02:57.273013992Z" level=info msg="StartContainer for \"2ecfe087687b59d5aa14193c76aa35cb16a04fd2caf3162c6d7c466d518e0602\" returns successfully" Sep 9 00:02:57.306392 containerd[1510]: time="2025-09-09T00:02:57.306330551Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:02:57.307632 containerd[1510]: time="2025-09-09T00:02:57.307037728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:02:57.309435 containerd[1510]: time="2025-09-09T00:02:57.309402216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 497.509199ms" Sep 9 00:02:57.309506 containerd[1510]: time="2025-09-09T00:02:57.309458412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:02:57.310798 containerd[1510]: time="2025-09-09T00:02:57.310551012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:02:57.312257 containerd[1510]: time="2025-09-09T00:02:57.312218151Z" level=info msg="CreateContainer within sandbox \"a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:02:57.329603 containerd[1510]: time="2025-09-09T00:02:57.329538087Z" level=info msg="CreateContainer within sandbox \"a3430f0d71ca9fa478269c0300062a1749581cc3e92bab0964558620695a07f7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1b167d32f6f5033a1bf464579ec26b6639a54d09ba82e2676b679cd9335ca719\"" Sep 9 00:02:57.330324 containerd[1510]: time="2025-09-09T00:02:57.330300278Z" level=info msg="StartContainer for \"1b167d32f6f5033a1bf464579ec26b6639a54d09ba82e2676b679cd9335ca719\"" Sep 9 00:02:57.362339 systemd[1]: Started cri-containerd-1b167d32f6f5033a1bf464579ec26b6639a54d09ba82e2676b679cd9335ca719.scope - libcontainer container 1b167d32f6f5033a1bf464579ec26b6639a54d09ba82e2676b679cd9335ca719. 
Sep 9 00:02:57.421766 containerd[1510]: time="2025-09-09T00:02:57.421715828Z" level=info msg="StartContainer for \"1b167d32f6f5033a1bf464579ec26b6639a54d09ba82e2676b679cd9335ca719\" returns successfully" Sep 9 00:02:58.702161 kubelet[2630]: I0909 00:02:58.701418 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68c5d8b85b-fq8gn" podStartSLOduration=26.661391847 podStartE2EDuration="32.701402581s" podCreationTimestamp="2025-09-09 00:02:26 +0000 UTC" firstStartedPulling="2025-09-09 00:02:50.771650688 +0000 UTC m=+41.439221509" lastFinishedPulling="2025-09-09 00:02:56.811661422 +0000 UTC m=+47.479232243" observedRunningTime="2025-09-09 00:02:58.701368067 +0000 UTC m=+49.368938888" watchObservedRunningTime="2025-09-09 00:02:58.701402581 +0000 UTC m=+49.368973402" Sep 9 00:02:58.934540 kubelet[2630]: I0909 00:02:58.934503 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:02:59.109448 kubelet[2630]: I0909 00:02:59.108565 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79646b996b-cw46r" podStartSLOduration=29.627210522 podStartE2EDuration="36.108519041s" podCreationTimestamp="2025-09-09 00:02:23 +0000 UTC" firstStartedPulling="2025-09-09 00:02:50.828930288 +0000 UTC m=+41.496501109" lastFinishedPulling="2025-09-09 00:02:57.310238807 +0000 UTC m=+47.977809628" observedRunningTime="2025-09-09 00:02:58.966605751 +0000 UTC m=+49.634176602" watchObservedRunningTime="2025-09-09 00:02:59.108519041 +0000 UTC m=+49.776089872" Sep 9 00:03:00.179332 containerd[1510]: time="2025-09-09T00:03:00.179275483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:00.180309 containerd[1510]: time="2025-09-09T00:03:00.180263437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:03:00.181662 containerd[1510]: time="2025-09-09T00:03:00.181622497Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:00.184372 containerd[1510]: time="2025-09-09T00:03:00.184324588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:00.184924 containerd[1510]: time="2025-09-09T00:03:00.184899126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.874314982s" Sep 9 00:03:00.184966 containerd[1510]: time="2025-09-09T00:03:00.184926888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:03:00.185948 containerd[1510]: time="2025-09-09T00:03:00.185919090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:03:00.187097 containerd[1510]: time="2025-09-09T00:03:00.186839398Z" level=info msg="CreateContainer within sandbox 
\"fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:03:00.202798 containerd[1510]: time="2025-09-09T00:03:00.202742932Z" level=info msg="CreateContainer within sandbox \"fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0\"" Sep 9 00:03:00.203282 containerd[1510]: time="2025-09-09T00:03:00.203261876Z" level=info msg="StartContainer for \"1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0\"" Sep 9 00:03:00.238171 systemd[1]: Started cri-containerd-1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0.scope - libcontainer container 1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0. Sep 9 00:03:00.280681 containerd[1510]: time="2025-09-09T00:03:00.280622443Z" level=info msg="StartContainer for \"1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0\" returns successfully" Sep 9 00:03:01.157423 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:54522.service - OpenSSH per-connection server daemon (10.0.0.1:54522). Sep 9 00:03:01.207726 sshd[5797]: Accepted publickey for core from 10.0.0.1 port 54522 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:01.209466 sshd-session[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:01.213797 systemd-logind[1493]: New session 10 of user core. Sep 9 00:03:01.229169 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:03:01.354235 sshd[5799]: Connection closed by 10.0.0.1 port 54522 Sep 9 00:03:01.354564 sshd-session[5797]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:01.358376 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:54522.service: Deactivated successfully. Sep 9 00:03:01.360607 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:03:01.361393 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:03:01.362366 systemd-logind[1493]: Removed session 10. 
Sep 9 00:03:02.113284 containerd[1510]: time="2025-09-09T00:03:02.113213870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:02.114677 containerd[1510]: time="2025-09-09T00:03:02.114551469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:03:02.116099 containerd[1510]: time="2025-09-09T00:03:02.116059457Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:02.118549 containerd[1510]: time="2025-09-09T00:03:02.118507946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:02.119313 containerd[1510]: time="2025-09-09T00:03:02.119279829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.933332416s" Sep 9 00:03:02.119313 containerd[1510]: time="2025-09-09T00:03:02.119311230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 00:03:02.120368 containerd[1510]: time="2025-09-09T00:03:02.120340872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:03:02.123071 containerd[1510]: time="2025-09-09T00:03:02.123014666Z" level=info msg="CreateContainer within sandbox \"d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:03:02.151867 containerd[1510]: time="2025-09-09T00:03:02.151818929Z" level=info msg="CreateContainer within sandbox \"d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a94af56ac515bf38d7f46d3d54429eac29df006ca511744aabc522a42df68c1\"" Sep 9 00:03:02.152363 containerd[1510]: time="2025-09-09T00:03:02.152337012Z" level=info msg="StartContainer for \"8a94af56ac515bf38d7f46d3d54429eac29df006ca511744aabc522a42df68c1\"" Sep 9 00:03:02.190277 systemd[1]: Started cri-containerd-8a94af56ac515bf38d7f46d3d54429eac29df006ca511744aabc522a42df68c1.scope - libcontainer container 8a94af56ac515bf38d7f46d3d54429eac29df006ca511744aabc522a42df68c1. Sep 9 00:03:02.226422 containerd[1510]: time="2025-09-09T00:03:02.226369398Z" level=info msg="StartContainer for \"8a94af56ac515bf38d7f46d3d54429eac29df006ca511744aabc522a42df68c1\" returns successfully" Sep 9 00:03:04.648944 kubelet[2630]: I0909 00:03:04.648890 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:03:05.048660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196771843.mount: Deactivated successfully. 
Sep 9 00:03:05.895634 containerd[1510]: time="2025-09-09T00:03:05.895585221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:05.896563 containerd[1510]: time="2025-09-09T00:03:05.896532460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 00:03:05.897901 containerd[1510]: time="2025-09-09T00:03:05.897864931Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:05.900491 containerd[1510]: time="2025-09-09T00:03:05.900461524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:05.905954 containerd[1510]: time="2025-09-09T00:03:05.905917306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.785549792s" Sep 9 00:03:05.905954 containerd[1510]: time="2025-09-09T00:03:05.905947854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 00:03:05.906861 containerd[1510]: time="2025-09-09T00:03:05.906840027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:03:05.907891 containerd[1510]: time="2025-09-09T00:03:05.907865286Z" level=info msg="CreateContainer within sandbox \"2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:03:05.921707 containerd[1510]: time="2025-09-09T00:03:05.921658252Z" level=info msg="CreateContainer within sandbox \"2697ba7b32efeab97556dc5bc92b9aea5254d789ed8a314f64bd28c400a19c1a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f430d7d1bfe6db11249478c8e500cc45d046c57c64b420a85da73d7c86331eb3\"" Sep 9 00:03:05.922203 containerd[1510]: time="2025-09-09T00:03:05.922164369Z" level=info msg="StartContainer for \"f430d7d1bfe6db11249478c8e500cc45d046c57c64b420a85da73d7c86331eb3\"" Sep 9 00:03:05.984630 systemd[1]: run-containerd-runc-k8s.io-f430d7d1bfe6db11249478c8e500cc45d046c57c64b420a85da73d7c86331eb3-runc.r17TnV.mount: Deactivated successfully. Sep 9 00:03:05.997173 systemd[1]: Started cri-containerd-f430d7d1bfe6db11249478c8e500cc45d046c57c64b420a85da73d7c86331eb3.scope - libcontainer container f430d7d1bfe6db11249478c8e500cc45d046c57c64b420a85da73d7c86331eb3. Sep 9 00:03:06.133738 containerd[1510]: time="2025-09-09T00:03:06.133674041Z" level=info msg="StartContainer for \"f430d7d1bfe6db11249478c8e500cc45d046c57c64b420a85da73d7c86331eb3\" returns successfully" Sep 9 00:03:06.373683 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:54534.service - OpenSSH per-connection server daemon (10.0.0.1:54534). 
Sep 9 00:03:06.526924 sshd[5914]: Accepted publickey for core from 10.0.0.1 port 54534 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:06.528986 sshd-session[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:06.533744 systemd-logind[1493]: New session 11 of user core. Sep 9 00:03:06.541268 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:03:07.297984 sshd[5916]: Connection closed by 10.0.0.1 port 54534 Sep 9 00:03:07.298402 sshd-session[5914]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:07.303410 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:54534.service: Deactivated successfully. Sep 9 00:03:07.306378 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:03:07.307260 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:03:07.308255 systemd-logind[1493]: Removed session 11. Sep 9 00:03:09.415912 containerd[1510]: time="2025-09-09T00:03:09.415864329Z" level=info msg="StopPodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\"" Sep 9 00:03:09.416535 containerd[1510]: time="2025-09-09T00:03:09.415996262Z" level=info msg="TearDown network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" successfully" Sep 9 00:03:09.416535 containerd[1510]: time="2025-09-09T00:03:09.416009588Z" level=info msg="StopPodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" returns successfully" Sep 9 00:03:09.467443 containerd[1510]: time="2025-09-09T00:03:09.467382635Z" level=info msg="RemovePodSandbox for \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\"" Sep 9 00:03:09.471123 containerd[1510]: time="2025-09-09T00:03:09.471098377Z" level=info msg="Forcibly stopping sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\"" Sep 9 00:03:09.485014 containerd[1510]: time="2025-09-09T00:03:09.471208890Z" level=info msg="TearDown network for sandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" successfully" Sep 9 00:03:09.771669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495476195.mount: Deactivated successfully. Sep 9 00:03:09.784085 containerd[1510]: time="2025-09-09T00:03:09.784042390Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:09.784186 containerd[1510]: time="2025-09-09T00:03:09.784145809Z" level=info msg="RemovePodSandbox \"0d5b00aebe6e4071f23e10b68b97a95ef2331bf747495e24c184557b9c4b09b3\" returns successfully" Sep 9 00:03:09.789914 containerd[1510]: time="2025-09-09T00:03:09.784707218Z" level=info msg="StopPodSandbox for \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\"" Sep 9 00:03:09.789914 containerd[1510]: time="2025-09-09T00:03:09.784806490Z" level=info msg="TearDown network for sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\" successfully" Sep 9 00:03:09.789914 containerd[1510]: time="2025-09-09T00:03:09.784815828Z" level=info msg="StopPodSandbox for \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\" returns successfully" Sep 9 00:03:09.789914 containerd[1510]: time="2025-09-09T00:03:09.785269792Z" level=info msg="RemovePodSandbox for \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\"" Sep 9 00:03:09.789914 containerd[1510]: time="2025-09-09T00:03:09.785310410Z" level=info msg="Forcibly stopping sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\"" Sep 9 00:03:09.789914 containerd[1510]: time="2025-09-09T00:03:09.785414751Z" level=info msg="TearDown network for sandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\" successfully" Sep 9 00:03:10.192456 containerd[1510]: time="2025-09-09T00:03:10.192379511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:10.192741 containerd[1510]: time="2025-09-09T00:03:10.192488251Z" level=info msg="RemovePodSandbox \"f76f6079621d5413bec474547daaee725da0e1ab930e1975b223c86e9edf623c\" returns successfully" Sep 9 00:03:10.193521 containerd[1510]: time="2025-09-09T00:03:10.193472063Z" level=info msg="StopPodSandbox for \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\"" Sep 9 00:03:10.193737 containerd[1510]: time="2025-09-09T00:03:10.193615058Z" level=info msg="TearDown network for sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\" successfully" Sep 9 00:03:10.193737 containerd[1510]: time="2025-09-09T00:03:10.193626791Z" level=info msg="StopPodSandbox for \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\" returns successfully" Sep 9 00:03:10.194269 containerd[1510]: time="2025-09-09T00:03:10.194211004Z" level=info msg="RemovePodSandbox for \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\"" Sep 9 00:03:10.194269 containerd[1510]: time="2025-09-09T00:03:10.194256532Z" level=info msg="Forcibly stopping sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\"" Sep 9 00:03:10.194429 containerd[1510]: time="2025-09-09T00:03:10.194372375Z" level=info msg="TearDown network for sandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\" successfully" Sep 9 00:03:10.270398 containerd[1510]: time="2025-09-09T00:03:10.270316854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.270398 containerd[1510]: time="2025-09-09T00:03:10.270405224Z" level=info msg="RemovePodSandbox \"e27a4b9bce3ed34298fe8dc748995c6d1b64e57fe55639707ceedc65d7440661\" returns successfully" Sep 9 00:03:10.271073 containerd[1510]: time="2025-09-09T00:03:10.271011230Z" level=info msg="StopPodSandbox for \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\"" Sep 9 00:03:10.271223 containerd[1510]: time="2025-09-09T00:03:10.271173121Z" level=info msg="TearDown network for sandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\" successfully" Sep 9 00:03:10.271223 containerd[1510]: time="2025-09-09T00:03:10.271192789Z" level=info msg="StopPodSandbox for \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\" returns successfully" Sep 9 00:03:10.273064 containerd[1510]: time="2025-09-09T00:03:10.271532032Z" level=info msg="RemovePodSandbox for \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\"" Sep 9 00:03:10.273064 containerd[1510]: time="2025-09-09T00:03:10.271564223Z" level=info msg="Forcibly stopping sandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\"" Sep 9 00:03:10.273064 containerd[1510]: time="2025-09-09T00:03:10.271687841Z" level=info msg="TearDown network for sandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\" successfully" Sep 9 00:03:10.322937 containerd[1510]: time="2025-09-09T00:03:10.322802474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:10.322937 containerd[1510]: time="2025-09-09T00:03:10.322913397Z" level=info msg="RemovePodSandbox \"fbf8df7f2f60e92fb3ed7acf80d710cf1c98cd4d9b8f2931852bfd2263feb08a\" returns successfully" Sep 9 00:03:10.323495 containerd[1510]: time="2025-09-09T00:03:10.323455540Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\"" Sep 9 00:03:10.323661 containerd[1510]: time="2025-09-09T00:03:10.323621950Z" level=info msg="TearDown network for sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" successfully" Sep 9 00:03:10.323661 containerd[1510]: time="2025-09-09T00:03:10.323644102Z" level=info msg="StopPodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" returns successfully" Sep 9 00:03:10.323967 containerd[1510]: time="2025-09-09T00:03:10.323941936Z" level=info msg="RemovePodSandbox for \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\"" Sep 9 00:03:10.324022 containerd[1510]: time="2025-09-09T00:03:10.323971011Z" level=info msg="Forcibly stopping sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\"" Sep 9 00:03:10.324134 containerd[1510]: time="2025-09-09T00:03:10.324085271Z" level=info msg="TearDown network for sandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" successfully" Sep 9 00:03:10.331227 containerd[1510]: time="2025-09-09T00:03:10.331159006Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.331227 containerd[1510]: time="2025-09-09T00:03:10.331233059Z" level=info msg="RemovePodSandbox \"3577b5f0689a9b014873f9ffa541b41296a332e7e174ae4f75e1e9706c3c61ef\" returns successfully" Sep 9 00:03:10.331751 containerd[1510]: time="2025-09-09T00:03:10.331708864Z" level=info msg="StopPodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\"" Sep 9 00:03:10.333120 containerd[1510]: time="2025-09-09T00:03:10.333066575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:10.334524 containerd[1510]: time="2025-09-09T00:03:10.334465797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:03:10.336155 containerd[1510]: time="2025-09-09T00:03:10.336122693Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:10.356160 containerd[1510]: time="2025-09-09T00:03:10.356015204Z" level=info msg="TearDown network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" successfully" Sep 9 00:03:10.356160 containerd[1510]: time="2025-09-09T00:03:10.356121308Z" level=info msg="StopPodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" returns successfully" Sep 9 00:03:10.356754 containerd[1510]: time="2025-09-09T00:03:10.356722926Z" level=info msg="RemovePodSandbox for \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\"" Sep 9 00:03:10.356818 containerd[1510]: time="2025-09-09T00:03:10.356766820Z" level=info msg="Forcibly stopping sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\"" Sep 9 00:03:10.356936 containerd[1510]: time="2025-09-09T00:03:10.356889536Z" level=info msg="TearDown network for sandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" successfully" Sep 9 00:03:10.427850 containerd[1510]: time="2025-09-09T00:03:10.427755809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:10.446625 containerd[1510]: time="2025-09-09T00:03:10.446367917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.539496469s" Sep 9 00:03:10.446625 containerd[1510]: time="2025-09-09T00:03:10.446422783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:03:10.467117 containerd[1510]: time="2025-09-09T00:03:10.465852543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:03:10.468103 containerd[1510]: time="2025-09-09T00:03:10.468064137Z" level=info msg="CreateContainer within sandbox \"fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:03:10.780957 containerd[1510]: 
time="2025-09-09T00:03:10.780795447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:10.780957 containerd[1510]: time="2025-09-09T00:03:10.780903575Z" level=info msg="RemovePodSandbox \"b6ab3930a48eaf56341f0ec34ce1db291a4082a652f9f90e42171ab4a43756f6\" returns successfully" Sep 9 00:03:10.781502 containerd[1510]: time="2025-09-09T00:03:10.781467189Z" level=info msg="StopPodSandbox for \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\"" Sep 9 00:03:10.781601 containerd[1510]: time="2025-09-09T00:03:10.781583012Z" level=info msg="TearDown network for sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\" successfully" Sep 9 00:03:10.781601 containerd[1510]: time="2025-09-09T00:03:10.781598351Z" level=info msg="StopPodSandbox for \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\" returns successfully" Sep 9 00:03:10.781899 containerd[1510]: time="2025-09-09T00:03:10.781860736Z" level=info msg="RemovePodSandbox for \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\"" Sep 9 00:03:10.782130 containerd[1510]: time="2025-09-09T00:03:10.781903408Z" level=info msg="Forcibly stopping sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\"" Sep 9 00:03:10.782130 containerd[1510]: time="2025-09-09T00:03:10.781997409Z" level=info msg="TearDown network for sandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\" successfully" Sep 9 00:03:10.791319 containerd[1510]: time="2025-09-09T00:03:10.791124905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.791319 containerd[1510]: time="2025-09-09T00:03:10.791279412Z" level=info msg="RemovePodSandbox \"6b614423a98f999ebbe4c84f74ae5bb4541fd54f2bcb6e881ed40bda9ec138af\" returns successfully" Sep 9 00:03:10.792276 containerd[1510]: time="2025-09-09T00:03:10.792233687Z" level=info msg="StopPodSandbox for \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\"" Sep 9 00:03:10.792455 containerd[1510]: time="2025-09-09T00:03:10.792366523Z" level=info msg="TearDown network for sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\" successfully" Sep 9 00:03:10.792455 containerd[1510]: time="2025-09-09T00:03:10.792379217Z" level=info msg="StopPodSandbox for \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\" returns successfully" Sep 9 00:03:10.792878 containerd[1510]: time="2025-09-09T00:03:10.792834843Z" level=info msg="RemovePodSandbox for \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\"" Sep 9 00:03:10.792878 containerd[1510]: time="2025-09-09T00:03:10.792862958Z" level=info msg="Forcibly stopping sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\"" Sep 9 00:03:10.793049 containerd[1510]: time="2025-09-09T00:03:10.792952500Z" level=info msg="TearDown network for sandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\" successfully" Sep 9 00:03:10.799410 containerd[1510]: time="2025-09-09T00:03:10.799335026Z" level=info msg="CreateContainer within sandbox \"fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf\"" Sep 9 00:03:10.799906 containerd[1510]: time="2025-09-09T00:03:10.799763810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.799906 containerd[1510]: time="2025-09-09T00:03:10.799817714Z" level=info msg="RemovePodSandbox \"44f4edd25335a1f27037936f9a8945b2b4b577e7090925abb63e14fc272e2033\" returns successfully" Sep 9 00:03:10.799997 containerd[1510]: time="2025-09-09T00:03:10.799924710Z" level=info msg="StartContainer for \"cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf\"" Sep 9 00:03:10.800141 containerd[1510]: time="2025-09-09T00:03:10.800118582Z" level=info msg="StopPodSandbox for \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\"" Sep 9 00:03:10.800253 containerd[1510]: time="2025-09-09T00:03:10.800231881Z" level=info msg="TearDown network for sandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\" successfully" Sep 9 00:03:10.800253 containerd[1510]: time="2025-09-09T00:03:10.800250556Z" level=info msg="StopPodSandbox for \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\" returns successfully" Sep 9 00:03:10.800538 containerd[1510]: time="2025-09-09T00:03:10.800517139Z" level=info msg="RemovePodSandbox for \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\"" Sep 9 00:03:10.800607 containerd[1510]: time="2025-09-09T00:03:10.800542968Z" level=info msg="Forcibly stopping sandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\"" Sep 9 00:03:10.800669 containerd[1510]: time="2025-09-09T00:03:10.800623504Z" level=info msg="TearDown network for sandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\" successfully" Sep 9 00:03:10.806144 containerd[1510]: time="2025-09-09T00:03:10.806084728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.806367 containerd[1510]: time="2025-09-09T00:03:10.806176544Z" level=info msg="RemovePodSandbox \"b284f2e2d3c104f1568e625f86aca83bcfc55ac0bc7de454fdd6fa93c5b702cf\" returns successfully" Sep 9 00:03:10.806750 containerd[1510]: time="2025-09-09T00:03:10.806717044Z" level=info msg="StopPodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\"" Sep 9 00:03:10.806878 containerd[1510]: time="2025-09-09T00:03:10.806851894Z" level=info msg="TearDown network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" successfully" Sep 9 00:03:10.806878 containerd[1510]: time="2025-09-09T00:03:10.806872022Z" level=info msg="StopPodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" returns successfully" Sep 9 00:03:10.807252 containerd[1510]: time="2025-09-09T00:03:10.807209752Z" level=info msg="RemovePodSandbox for \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\"" Sep 9 00:03:10.807252 containerd[1510]: time="2025-09-09T00:03:10.807236593Z" level=info msg="Forcibly stopping sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\"" Sep 9 00:03:10.807454 containerd[1510]: time="2025-09-09T00:03:10.807316487Z" level=info msg="TearDown network for sandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" successfully" Sep 9 00:03:10.812366 containerd[1510]: time="2025-09-09T00:03:10.812309370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:10.812491 containerd[1510]: time="2025-09-09T00:03:10.812387370Z" level=info msg="RemovePodSandbox \"63ccae1df089407ceec34b6b0e8c68262147eba8ba8dc80214beb7ec4055b9b3\" returns successfully" Sep 9 00:03:10.814053 containerd[1510]: time="2025-09-09T00:03:10.812845010Z" level=info msg="StopPodSandbox for \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\"" Sep 9 00:03:10.814053 containerd[1510]: time="2025-09-09T00:03:10.812992073Z" level=info msg="TearDown network for sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\" successfully" Sep 9 00:03:10.814053 containerd[1510]: time="2025-09-09T00:03:10.813009377Z" level=info msg="StopPodSandbox for \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\" returns successfully" Sep 9 00:03:10.814053 containerd[1510]: time="2025-09-09T00:03:10.813353899Z" level=info msg="RemovePodSandbox for \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\"" Sep 9 00:03:10.814053 containerd[1510]: time="2025-09-09T00:03:10.813377634Z" level=info msg="Forcibly stopping sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\"" Sep 9 00:03:10.814053 containerd[1510]: time="2025-09-09T00:03:10.813463890Z" level=info msg="TearDown network for sandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\" successfully" Sep 9 00:03:10.821558 containerd[1510]: time="2025-09-09T00:03:10.821488295Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.821754 containerd[1510]: time="2025-09-09T00:03:10.821592946Z" level=info msg="RemovePodSandbox \"e7bfb4d4fd94e159d2d7bda916cfdab41be34f70cb4545a8ce2df2a7d7c0b473\" returns successfully" Sep 9 00:03:10.822213 containerd[1510]: time="2025-09-09T00:03:10.822187329Z" level=info msg="StopPodSandbox for \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\"" Sep 9 00:03:10.822331 containerd[1510]: time="2025-09-09T00:03:10.822308953Z" level=info msg="TearDown network for sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\" successfully" Sep 9 00:03:10.822331 containerd[1510]: time="2025-09-09T00:03:10.822327188Z" level=info msg="StopPodSandbox for \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\" returns successfully" Sep 9 00:03:10.822637 containerd[1510]: time="2025-09-09T00:03:10.822613889Z" level=info msg="RemovePodSandbox for \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\"" Sep 9 00:03:10.822681 containerd[1510]: time="2025-09-09T00:03:10.822638287Z" level=info msg="Forcibly stopping sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\"" Sep 9 00:03:10.822754 containerd[1510]: time="2025-09-09T00:03:10.822709223Z" level=info msg="TearDown network for sandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\" successfully" Sep 9 00:03:10.828081 containerd[1510]: time="2025-09-09T00:03:10.827851253Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:10.828081 containerd[1510]: time="2025-09-09T00:03:10.827948310Z" level=info msg="RemovePodSandbox \"cb0043d3a76c3269db8066e2d9ee26d5214d6ac7a5a6952aebcf946cf71588da\" returns successfully" Sep 9 00:03:10.828644 containerd[1510]: time="2025-09-09T00:03:10.828500491Z" level=info msg="StopPodSandbox for \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\"" Sep 9 00:03:10.828687 containerd[1510]: time="2025-09-09T00:03:10.828640751Z" level=info msg="TearDown network for sandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\" successfully" Sep 9 00:03:10.828687 containerd[1510]: time="2025-09-09T00:03:10.828656602Z" level=info msg="StopPodSandbox for \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\" returns successfully" Sep 9 00:03:10.829201 containerd[1510]: time="2025-09-09T00:03:10.829179378Z" level=info msg="RemovePodSandbox for \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\"" Sep 9 00:03:10.829272 containerd[1510]: time="2025-09-09T00:03:10.829260494Z" level=info msg="Forcibly stopping sandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\"" Sep 9 00:03:10.829447 containerd[1510]: time="2025-09-09T00:03:10.829402417Z" level=info msg="TearDown network for sandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\" successfully" Sep 9 00:03:10.836354 containerd[1510]: time="2025-09-09T00:03:10.835688247Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.840226 containerd[1510]: time="2025-09-09T00:03:10.837206507Z" level=info msg="RemovePodSandbox \"34c375a85cdaeed11a9a9395614046f7fd8cb7ec5d150322686d5030aba8cb5e\" returns successfully" Sep 9 00:03:10.840226 containerd[1510]: time="2025-09-09T00:03:10.837753038Z" level=info msg="StopPodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\"" Sep 9 00:03:10.840226 containerd[1510]: time="2025-09-09T00:03:10.837849443Z" level=info msg="TearDown network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" successfully" Sep 9 00:03:10.840226 containerd[1510]: time="2025-09-09T00:03:10.837858280Z" level=info msg="StopPodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" returns successfully" Sep 9 00:03:10.840226 containerd[1510]: time="2025-09-09T00:03:10.838270844Z" level=info msg="RemovePodSandbox for \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\"" Sep 9 00:03:10.840226 containerd[1510]: time="2025-09-09T00:03:10.838291203Z" level=info msg="Forcibly stopping sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\"" Sep 9 00:03:10.840226 containerd[1510]: time="2025-09-09T00:03:10.838368392Z" level=info msg="TearDown network for sandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" successfully" Sep 9 00:03:10.840640 systemd[1]: run-containerd-runc-k8s.io-cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf-runc.o5c816.mount: Deactivated successfully. Sep 9 00:03:10.843388 containerd[1510]: time="2025-09-09T00:03:10.842965433Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:10.843388 containerd[1510]: time="2025-09-09T00:03:10.843026200Z" level=info msg="RemovePodSandbox \"aef9c05112943bf51174b54b3d085768f10ee4d0bc16888fdf640bfcc886cf8f\" returns successfully" Sep 9 00:03:10.843527 containerd[1510]: time="2025-09-09T00:03:10.843497287Z" level=info msg="StopPodSandbox for \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\"" Sep 9 00:03:10.844690 containerd[1510]: time="2025-09-09T00:03:10.843589574Z" level=info msg="TearDown network for sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\" successfully" Sep 9 00:03:10.844690 containerd[1510]: time="2025-09-09T00:03:10.843603581Z" level=info msg="StopPodSandbox for \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\" returns successfully" Sep 9 00:03:10.844690 containerd[1510]: time="2025-09-09T00:03:10.843849443Z" level=info msg="RemovePodSandbox for \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\"" Sep 9 00:03:10.844690 containerd[1510]: time="2025-09-09T00:03:10.843872709Z" level=info msg="Forcibly stopping sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\"" Sep 9 00:03:10.844690 containerd[1510]: time="2025-09-09T00:03:10.843959395Z" level=info msg="TearDown network for sandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\" successfully" Sep 9 00:03:10.849075 containerd[1510]: time="2025-09-09T00:03:10.849024137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Sep 9 00:03:10.849181 containerd[1510]: time="2025-09-09T00:03:10.849094161Z" level=info msg="RemovePodSandbox \"74ed2b5d9b85c3adf2baac8f6be22b6d6e6049e2b03e17c4636d8e73c5424ac1\" returns successfully" Sep 9 00:03:10.849497 containerd[1510]: time="2025-09-09T00:03:10.849472058Z" level=info msg="StopPodSandbox for \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\"" Sep 9 00:03:10.852317 systemd[1]: Started cri-containerd-cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf.scope - libcontainer container cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf. Sep 9 00:03:10.855506 containerd[1510]: time="2025-09-09T00:03:10.849564315Z" level=info msg="TearDown network for sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\" successfully" Sep 9 00:03:10.855506 containerd[1510]: time="2025-09-09T00:03:10.855487297Z" level=info msg="StopPodSandbox for \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\" returns successfully" Sep 9 00:03:10.856135 containerd[1510]: time="2025-09-09T00:03:10.856110176Z" level=info msg="RemovePodSandbox for \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\"" Sep 9 00:03:10.856306 containerd[1510]: time="2025-09-09T00:03:10.856267157Z" level=info msg="Forcibly stopping sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\"" Sep 9 00:03:10.858372 containerd[1510]: time="2025-09-09T00:03:10.856481450Z" level=info msg="TearDown network for sandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\" successfully" Sep 9 00:03:10.861588 containerd[1510]: time="2025-09-09T00:03:10.861548586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.861670 containerd[1510]: time="2025-09-09T00:03:10.861628158Z" level=info msg="RemovePodSandbox \"5c004f51b63f02fbc9b60b03dde6d10f501dcfadc879f7b1c4797d8b435b74f8\" returns successfully" Sep 9 00:03:10.862286 containerd[1510]: time="2025-09-09T00:03:10.862247400Z" level=info msg="StopPodSandbox for \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\"" Sep 9 00:03:10.862408 containerd[1510]: time="2025-09-09T00:03:10.862374685Z" level=info msg="TearDown network for sandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\" successfully" Sep 9 00:03:10.862408 containerd[1510]: time="2025-09-09T00:03:10.862388051Z" level=info msg="StopPodSandbox for \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\" returns successfully" Sep 9 00:03:10.862691 containerd[1510]: time="2025-09-09T00:03:10.862646557Z" level=info msg="RemovePodSandbox for \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\"" Sep 9 00:03:10.862691 containerd[1510]: time="2025-09-09T00:03:10.862673910Z" level=info msg="Forcibly stopping sandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\"" Sep 9 00:03:10.862907 containerd[1510]: time="2025-09-09T00:03:10.862782099Z" level=info msg="TearDown network for sandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\" successfully" Sep 9 00:03:10.868216 containerd[1510]: time="2025-09-09T00:03:10.867989975Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:10.868216 containerd[1510]: time="2025-09-09T00:03:10.868126498Z" level=info msg="RemovePodSandbox \"27644e9d9360d778ba1bb9c79867001b5bfef86b0ee1188b96ae626e5b2b656b\" returns successfully" Sep 9 00:03:10.868794 containerd[1510]: time="2025-09-09T00:03:10.868767700Z" level=info msg="StopPodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\"" Sep 9 00:03:10.868941 containerd[1510]: time="2025-09-09T00:03:10.868917839Z" level=info msg="TearDown network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" successfully" Sep 9 00:03:10.868941 containerd[1510]: time="2025-09-09T00:03:10.868935804Z" level=info msg="StopPodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" returns successfully" Sep 9 00:03:10.871791 containerd[1510]: time="2025-09-09T00:03:10.871759525Z" level=info msg="RemovePodSandbox for \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\"" Sep 9 00:03:10.871867 containerd[1510]: time="2025-09-09T00:03:10.871793000Z" level=info msg="Forcibly stopping sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\"" Sep 9 00:03:10.872016 containerd[1510]: time="2025-09-09T00:03:10.871878865Z" level=info msg="TearDown network for sandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" successfully" Sep 9 00:03:10.877376 containerd[1510]: time="2025-09-09T00:03:10.877290543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.877376 containerd[1510]: time="2025-09-09T00:03:10.877355889Z" level=info msg="RemovePodSandbox \"d7368a6d06496377ba02b3738516f700b07db50e8bf95083514d09051d2e999f\" returns successfully" Sep 9 00:03:10.877731 containerd[1510]: time="2025-09-09T00:03:10.877701414Z" level=info msg="StopPodSandbox for \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\"" Sep 9 00:03:10.877833 containerd[1510]: time="2025-09-09T00:03:10.877808880Z" level=info msg="TearDown network for sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\" successfully" Sep 9 00:03:10.877833 containerd[1510]: time="2025-09-09T00:03:10.877828288Z" level=info msg="StopPodSandbox for \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\" returns successfully" Sep 9 00:03:10.882057 containerd[1510]: time="2025-09-09T00:03:10.878325053Z" level=info msg="RemovePodSandbox for \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\"" Sep 9 00:03:10.882057 containerd[1510]: time="2025-09-09T00:03:10.878373116Z" level=info msg="Forcibly stopping sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\"" Sep 9 00:03:10.882057 containerd[1510]: time="2025-09-09T00:03:10.878481163Z" level=info msg="TearDown network for sandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\" successfully" Sep 9 00:03:10.884610 containerd[1510]: time="2025-09-09T00:03:10.884540749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:10.884701 containerd[1510]: time="2025-09-09T00:03:10.884676319Z" level=info msg="RemovePodSandbox \"cac6fee0700ccec496d33b455347fd64ff07235218642f37e88506bb2e420d9d\" returns successfully" Sep 9 00:03:10.885132 containerd[1510]: time="2025-09-09T00:03:10.885094794Z" level=info msg="StopPodSandbox for \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\"" Sep 9 00:03:10.885503 containerd[1510]: time="2025-09-09T00:03:10.885472901Z" level=info msg="TearDown network for sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\" successfully" Sep 9 00:03:10.885503 containerd[1510]: time="2025-09-09T00:03:10.885498089Z" level=info msg="StopPodSandbox for \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\" returns successfully" Sep 9 00:03:10.885806 containerd[1510]: time="2025-09-09T00:03:10.885775263Z" level=info msg="RemovePodSandbox for \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\"" Sep 9 00:03:10.885849 containerd[1510]: time="2025-09-09T00:03:10.885805251Z" level=info msg="Forcibly stopping sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\"" Sep 9 00:03:10.885956 containerd[1510]: time="2025-09-09T00:03:10.885903530Z" level=info msg="TearDown network for sandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\" successfully" Sep 9 00:03:10.890865 containerd[1510]: time="2025-09-09T00:03:10.890822421Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.890969 containerd[1510]: time="2025-09-09T00:03:10.890898487Z" level=info msg="RemovePodSandbox \"0bb2423d5b1632654b465ec0626bb1c67e7fe1179375569d64024ba26bbcf86d\" returns successfully" Sep 9 00:03:10.891250 containerd[1510]: time="2025-09-09T00:03:10.891220466Z" level=info msg="StopPodSandbox for \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\"" Sep 9 00:03:10.891345 containerd[1510]: time="2025-09-09T00:03:10.891325879Z" level=info msg="TearDown network for sandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\" successfully" Sep 9 00:03:10.891345 containerd[1510]: time="2025-09-09T00:03:10.891340216Z" level=info msg="StopPodSandbox for \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\" returns successfully" Sep 9 00:03:10.891861 containerd[1510]: time="2025-09-09T00:03:10.891817234Z" level=info msg="RemovePodSandbox for \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\"" Sep 9 00:03:10.891918 containerd[1510]: time="2025-09-09T00:03:10.891869484Z" level=info msg="Forcibly stopping sandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\"" Sep 9 00:03:10.892073 containerd[1510]: time="2025-09-09T00:03:10.892004043Z" level=info msg="TearDown network for sandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\" successfully" Sep 9 00:03:10.897859 containerd[1510]: time="2025-09-09T00:03:10.897806012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:10.898006 containerd[1510]: time="2025-09-09T00:03:10.897909622Z" level=info msg="RemovePodSandbox \"eaab8f23f5581ebece3209fd884f6ddc229c7a7a278b6bb06f6ba96f12e4cbbb\" returns successfully" Sep 9 00:03:10.898802 containerd[1510]: time="2025-09-09T00:03:10.898302167Z" level=info msg="StopPodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\"" Sep 9 00:03:10.898802 containerd[1510]: time="2025-09-09T00:03:10.898437948Z" level=info msg="TearDown network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" successfully" Sep 9 00:03:10.898802 containerd[1510]: time="2025-09-09T00:03:10.898452346Z" level=info msg="StopPodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" returns successfully" Sep 9 00:03:10.898935 containerd[1510]: time="2025-09-09T00:03:10.898916398Z" level=info msg="RemovePodSandbox for \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\"" Sep 9 00:03:10.898978 containerd[1510]: time="2025-09-09T00:03:10.898941717Z" level=info msg="Forcibly stopping sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\"" Sep 9 00:03:10.899112 containerd[1510]: time="2025-09-09T00:03:10.899058482Z" level=info msg="TearDown network for sandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" successfully" Sep 9 00:03:10.904521 containerd[1510]: time="2025-09-09T00:03:10.904480991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.904521 containerd[1510]: time="2025-09-09T00:03:10.904539474Z" level=info msg="RemovePodSandbox \"47fa50ccce95a5ca933272b95a6f5759c107ddac057548c0ea7d1e0f83021ccd\" returns successfully" Sep 9 00:03:10.905241 containerd[1510]: time="2025-09-09T00:03:10.904931647Z" level=info msg="StopPodSandbox for \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\"" Sep 9 00:03:10.905241 containerd[1510]: time="2025-09-09T00:03:10.905104551Z" level=info msg="TearDown network for sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\" successfully" Sep 9 00:03:10.905241 containerd[1510]: time="2025-09-09T00:03:10.905167572Z" level=info msg="StopPodSandbox for \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\" returns successfully" Sep 9 00:03:10.905868 containerd[1510]: time="2025-09-09T00:03:10.905845245Z" level=info msg="RemovePodSandbox for \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\"" Sep 9 00:03:10.905965 containerd[1510]: time="2025-09-09T00:03:10.905946320Z" level=info msg="Forcibly stopping sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\"" Sep 9 00:03:10.906151 containerd[1510]: time="2025-09-09T00:03:10.906110926Z" level=info msg="TearDown network for sandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\" successfully" Sep 9 00:03:10.914751 containerd[1510]: time="2025-09-09T00:03:10.914695899Z" level=info msg="StartContainer for \"cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf\" returns successfully" Sep 9 00:03:10.915839 containerd[1510]: time="2025-09-09T00:03:10.915814840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.916095 containerd[1510]: time="2025-09-09T00:03:10.915986270Z" level=info msg="RemovePodSandbox \"b54075ee9726224285d47148c426f8a2bc8df80c1bb0a189c27cbdc194d2e164\" returns successfully" Sep 9 00:03:10.916694 containerd[1510]: time="2025-09-09T00:03:10.916669745Z" level=info msg="StopPodSandbox for \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\"" Sep 9 00:03:10.916869 containerd[1510]: time="2025-09-09T00:03:10.916841365Z" level=info msg="TearDown network for sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\" successfully" Sep 9 00:03:10.916965 containerd[1510]: time="2025-09-09T00:03:10.916902903Z" level=info msg="StopPodSandbox for \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\" returns successfully" Sep 9 00:03:10.917455 containerd[1510]: time="2025-09-09T00:03:10.917431139Z" level=info msg="RemovePodSandbox for \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\"" Sep 9 00:03:10.917614 containerd[1510]: time="2025-09-09T00:03:10.917589113Z" level=info msg="Forcibly stopping sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\"" Sep 9 00:03:10.917800 containerd[1510]: time="2025-09-09T00:03:10.917749862Z" level=info msg="TearDown network for sandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\" successfully" Sep 9 00:03:10.924890 containerd[1510]: time="2025-09-09T00:03:10.924763021Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:10.924890 containerd[1510]: time="2025-09-09T00:03:10.924870277Z" level=info msg="RemovePodSandbox \"8789a649583dbe4a105b5ff0234d394522594490ecbe56c5fadb7a0abc7ef76e\" returns successfully" Sep 9 00:03:10.925549 containerd[1510]: time="2025-09-09T00:03:10.925356022Z" level=info msg="StopPodSandbox for \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\"" Sep 9 00:03:10.925549 containerd[1510]: time="2025-09-09T00:03:10.925465011Z" level=info msg="TearDown network for sandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\" successfully" Sep 9 00:03:10.925549 containerd[1510]: time="2025-09-09T00:03:10.925479138Z" level=info msg="StopPodSandbox for \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\" returns successfully" Sep 9 00:03:10.928051 containerd[1510]: time="2025-09-09T00:03:10.925961776Z" level=info msg="RemovePodSandbox for \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\"" Sep 9 00:03:10.928051 containerd[1510]: time="2025-09-09T00:03:10.925986324Z" level=info msg="Forcibly stopping sandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\"" Sep 9 00:03:10.928051 containerd[1510]: time="2025-09-09T00:03:10.926082689Z" level=info msg="TearDown network for sandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\" successfully" Sep 9 00:03:10.945655 containerd[1510]: time="2025-09-09T00:03:10.945555042Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:10.945803 containerd[1510]: time="2025-09-09T00:03:10.945716472Z" level=info msg="RemovePodSandbox \"ec355016b3c360e1428dea342fb572f745b76d8697ce7bbb5384187a1a30294e\" returns successfully" Sep 9 00:03:10.946473 containerd[1510]: time="2025-09-09T00:03:10.946375470Z" level=info msg="StopPodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\"" Sep 9 00:03:10.946579 containerd[1510]: time="2025-09-09T00:03:10.946559654Z" level=info msg="TearDown network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" successfully" Sep 9 00:03:10.946579 containerd[1510]: time="2025-09-09T00:03:10.946574493Z" level=info msg="StopPodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" returns successfully" Sep 9 00:03:10.946873 containerd[1510]: time="2025-09-09T00:03:10.946850223Z" level=info msg="RemovePodSandbox for \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\"" Sep 9 00:03:10.946928 containerd[1510]: time="2025-09-09T00:03:10.946871865Z" level=info msg="Forcibly stopping sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\"" Sep 9 00:03:10.947158 containerd[1510]: time="2025-09-09T00:03:10.947096076Z" level=info msg="TearDown network for sandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" successfully" Sep 9 00:03:10.983660 containerd[1510]: time="2025-09-09T00:03:10.983534982Z" level=info msg="StopContainer for \"cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf\" with timeout 30 (s)" Sep 9 00:03:10.984048 containerd[1510]: time="2025-09-09T00:03:10.983937606Z" level=info msg="StopContainer for \"1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0\" with timeout 30 (s)" Sep 9 00:03:10.984048 containerd[1510]: time="2025-09-09T00:03:10.984021117Z" level=info msg="Stop container \"cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf\" with signal terminated" Sep 9 00:03:10.989924 containerd[1510]: time="2025-09-09T00:03:10.989818227Z" level=info msg="Stop container \"1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0\" with signal terminated" Sep 9 00:03:10.997949 systemd[1]: cri-containerd-cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf.scope: Deactivated successfully. Sep 9 00:03:11.018295 systemd[1]: cri-containerd-1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0.scope: Deactivated successfully. Sep 9 00:03:11.018792 systemd[1]: cri-containerd-1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0.scope: Consumed 55ms CPU time, 7.5M memory peak, 2.3M read from disk, 12K written to disk. 
Sep 9 00:03:11.064460 containerd[1510]: time="2025-09-09T00:03:11.037352997Z" level=info msg="shim disconnected" id=1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0 namespace=k8s.io Sep 9 00:03:11.064460 containerd[1510]: time="2025-09-09T00:03:11.037409136Z" level=warning msg="cleaning up after shim disconnected" id=1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0 namespace=k8s.io Sep 9 00:03:11.064460 containerd[1510]: time="2025-09-09T00:03:11.037417351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:03:11.067832 kubelet[2630]: I0909 00:03:11.067754 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-b854f49bb-nlfqw" podStartSLOduration=21.677718149 podStartE2EDuration="41.067733532s" podCreationTimestamp="2025-09-09 00:02:30 +0000 UTC" firstStartedPulling="2025-09-09 00:02:51.075526122 +0000 UTC m=+41.743096943" lastFinishedPulling="2025-09-09 00:03:10.465541484 +0000 UTC m=+61.133112326" observedRunningTime="2025-09-09 00:03:11.067360906 +0000 UTC m=+61.734931747" watchObservedRunningTime="2025-09-09 00:03:11.067733532 +0000 UTC m=+61.735304363" Sep 9 00:03:11.072105 kubelet[2630]: I0909 00:03:11.068186 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-ct9bv" podStartSLOduration=31.92449137 podStartE2EDuration="46.068178848s" podCreationTimestamp="2025-09-09 00:02:25 +0000 UTC" firstStartedPulling="2025-09-09 00:02:51.762996248 +0000 UTC m=+42.430567069" lastFinishedPulling="2025-09-09 00:03:05.906683716 +0000 UTC m=+56.574254547" observedRunningTime="2025-09-09 00:03:07.289449118 +0000 UTC m=+57.957019940" watchObservedRunningTime="2025-09-09 00:03:11.068178848 +0000 UTC m=+61.735749689" Sep 9 00:03:11.075378 containerd[1510]: time="2025-09-09T00:03:11.075325314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:11.075537 containerd[1510]: time="2025-09-09T00:03:11.075438331Z" level=info msg="RemovePodSandbox \"2be2fcb099b278d72dfb09985226e71ad74f5bcd9c3e1cf49b2a8230a2e9ee66\" returns successfully" Sep 9 00:03:11.076135 containerd[1510]: time="2025-09-09T00:03:11.076054786Z" level=info msg="StopPodSandbox for \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\"" Sep 9 00:03:11.076429 containerd[1510]: time="2025-09-09T00:03:11.076193102Z" level=info msg="TearDown network for sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\" successfully" Sep 9 00:03:11.076429 containerd[1510]: time="2025-09-09T00:03:11.076254270Z" level=info msg="StopPodSandbox for \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\" returns successfully" Sep 9 00:03:11.076618 containerd[1510]: time="2025-09-09T00:03:11.076594023Z" level=info msg="RemovePodSandbox for \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\"" Sep 9 00:03:11.076618 containerd[1510]: time="2025-09-09T00:03:11.076616496Z" level=info msg="Forcibly stopping sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\"" Sep 9 00:03:11.103322 containerd[1510]: time="2025-09-09T00:03:11.076678135Z" level=info msg="TearDown network for sandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\" successfully" Sep 9 00:03:11.129647 containerd[1510]: time="2025-09-09T00:03:11.129592511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:11.129788 containerd[1510]: time="2025-09-09T00:03:11.129681272Z" level=info msg="RemovePodSandbox \"a28852c4a1ce57030d7a5fba9c0435332cda660c9ee9e6d6a53bc652f57d451a\" returns successfully" Sep 9 00:03:11.135279 containerd[1510]: time="2025-09-09T00:03:11.135249765Z" level=info msg="StopPodSandbox for \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\"" Sep 9 00:03:11.135367 containerd[1510]: time="2025-09-09T00:03:11.135350388Z" level=info msg="TearDown network for sandbox \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\" successfully" Sep 9 00:03:11.135367 containerd[1510]: time="2025-09-09T00:03:11.135360748Z" level=info msg="StopPodSandbox for \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\" returns successfully" Sep 9 00:03:11.135509 containerd[1510]: time="2025-09-09T00:03:11.135447645Z" level=info msg="shim disconnected" id=cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf namespace=k8s.io Sep 9 00:03:11.135509 containerd[1510]: time="2025-09-09T00:03:11.135477081Z" level=warning msg="cleaning up after shim disconnected" id=cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf namespace=k8s.io Sep 9 00:03:11.135509 containerd[1510]: time="2025-09-09T00:03:11.135484146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:03:11.135839 containerd[1510]: time="2025-09-09T00:03:11.135807427Z" level=info msg="RemovePodSandbox for \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\"" Sep 9 00:03:11.135914 containerd[1510]: time="2025-09-09T00:03:11.135844928Z" level=info msg="Forcibly stopping sandbox \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\"" Sep 9 00:03:11.135992 containerd[1510]: time="2025-09-09T00:03:11.135945141Z" level=info msg="TearDown network for sandbox 
\"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\" successfully" Sep 9 00:03:11.140642 containerd[1510]: time="2025-09-09T00:03:11.140602203Z" level=info msg="StopContainer for \"1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0\" returns successfully" Sep 9 00:03:11.142223 containerd[1510]: time="2025-09-09T00:03:11.142199423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:11.142223 containerd[1510]: time="2025-09-09T00:03:11.142240131Z" level=info msg="RemovePodSandbox \"db99add3fe50c70ef7021f0ed2edabcbe38d208c326bf9c017d2df5e9e04f535\" returns successfully" Sep 9 00:03:11.142713 containerd[1510]: time="2025-09-09T00:03:11.142531781Z" level=info msg="StopPodSandbox for \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\"" Sep 9 00:03:11.142713 containerd[1510]: time="2025-09-09T00:03:11.142645280Z" level=info msg="TearDown network for sandbox \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\" successfully" Sep 9 00:03:11.142713 containerd[1510]: time="2025-09-09T00:03:11.142658375Z" level=info msg="StopPodSandbox for \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\" returns successfully" Sep 9 00:03:11.143281 containerd[1510]: time="2025-09-09T00:03:11.143093982Z" level=info msg="RemovePodSandbox for \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\"" Sep 9 00:03:11.143281 containerd[1510]: time="2025-09-09T00:03:11.143119281Z" level=info msg="Forcibly stopping sandbox \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\"" Sep 9 00:03:11.143281 containerd[1510]: time="2025-09-09T00:03:11.143195237Z" level=info msg="TearDown network for sandbox \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\" successfully" Sep 9 00:03:11.148472 containerd[1510]: time="2025-09-09T00:03:11.148415661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:11.148716 containerd[1510]: time="2025-09-09T00:03:11.148698705Z" level=info msg="RemovePodSandbox \"145dab1f893b933faa3ae14b4a921dc0906fa51647d6f4af94bd7949861127bb\" returns successfully" Sep 9 00:03:11.149164 containerd[1510]: time="2025-09-09T00:03:11.149144092Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\"" Sep 9 00:03:11.149393 containerd[1510]: time="2025-09-09T00:03:11.149373392Z" level=info msg="TearDown network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" successfully" Sep 9 00:03:11.149480 containerd[1510]: time="2025-09-09T00:03:11.149463515Z" level=info msg="StopPodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" returns successfully" Sep 9 00:03:11.149801 containerd[1510]: time="2025-09-09T00:03:11.149781516Z" level=info msg="RemovePodSandbox for \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\"" Sep 9 00:03:11.150096 containerd[1510]: time="2025-09-09T00:03:11.150064340Z" level=info msg="Forcibly stopping sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\"" Sep 9 00:03:11.150200 containerd[1510]: time="2025-09-09T00:03:11.150158191Z" level=info msg="TearDown network for sandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" successfully" Sep 9 00:03:11.158787 containerd[1510]: time="2025-09-09T00:03:11.158759242Z" level=info msg="StopContainer for \"cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf\" returns successfully" Sep 9 00:03:11.159141 containerd[1510]: time="2025-09-09T00:03:11.159114515Z" level=info msg="StopPodSandbox for \"fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660\"" Sep 9 00:03:11.162849 containerd[1510]: time="2025-09-09T00:03:11.162825839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:11.162937 containerd[1510]: time="2025-09-09T00:03:11.162882578Z" level=info msg="RemovePodSandbox \"ddc2e15746d39264afc49041ec63c805794c8202b3e684f1e0523e9a5394b1fb\" returns successfully" Sep 9 00:03:11.163178 containerd[1510]: time="2025-09-09T00:03:11.163150202Z" level=info msg="StopPodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\"" Sep 9 00:03:11.163280 containerd[1510]: time="2025-09-09T00:03:11.163263921Z" level=info msg="TearDown network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" successfully" Sep 9 00:03:11.163311 containerd[1510]: time="2025-09-09T00:03:11.163278579Z" level=info msg="StopPodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" returns successfully" Sep 9 00:03:11.163556 containerd[1510]: time="2025-09-09T00:03:11.163532246Z" level=info msg="RemovePodSandbox for \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\"" Sep 9 00:03:11.163626 containerd[1510]: time="2025-09-09T00:03:11.163559098Z" level=info msg="Forcibly stopping sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\"" Sep 9 00:03:11.163683 containerd[1510]: time="2025-09-09T00:03:11.163644222Z" level=info msg="TearDown network for sandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" successfully" Sep 9 00:03:11.168242 containerd[1510]: time="2025-09-09T00:03:11.168213815Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:11.168329 containerd[1510]: time="2025-09-09T00:03:11.168262789Z" level=info msg="RemovePodSandbox \"2a4e7796ab54ad519f32a12377ec9f729b350c333a42f3f652c6bcd493f7eb41\" returns successfully" Sep 9 00:03:11.168567 containerd[1510]: time="2025-09-09T00:03:11.168537097Z" level=info msg="StopPodSandbox for \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\"" Sep 9 00:03:11.168660 containerd[1510]: time="2025-09-09T00:03:11.168642389Z" level=info msg="TearDown network for sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\" successfully" Sep 9 00:03:11.168734 containerd[1510]: time="2025-09-09T00:03:11.168658990Z" level=info msg="StopPodSandbox for \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\" returns successfully" Sep 9 00:03:11.168867 containerd[1510]: time="2025-09-09T00:03:11.168837755Z" level=info msg="RemovePodSandbox for \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\"" Sep 9 00:03:11.168915 containerd[1510]: time="2025-09-09T00:03:11.168864155Z" level=info msg="Forcibly stopping sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\"" Sep 9 00:03:11.169049 containerd[1510]: time="2025-09-09T00:03:11.168996600Z" level=info msg="TearDown network for sandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\" successfully" Sep 9 00:03:11.169550 containerd[1510]: time="2025-09-09T00:03:11.159147368Z" level=info msg="Container to stop \"1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:03:11.169550 containerd[1510]: time="2025-09-09T00:03:11.169545004Z" level=info msg="Container to stop \"cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf\" must be in running or unknown 
state, current state \"CONTAINER_EXITED\"" Sep 9 00:03:11.173633 containerd[1510]: time="2025-09-09T00:03:11.173599216Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:11.173728 containerd[1510]: time="2025-09-09T00:03:11.173657018Z" level=info msg="RemovePodSandbox \"55a96784da382f6a162c971edb5378239e5ebd3ed6c4e2dfea3a1af22dbd29d2\" returns successfully" Sep 9 00:03:11.174092 containerd[1510]: time="2025-09-09T00:03:11.174066785Z" level=info msg="StopPodSandbox for \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\"" Sep 9 00:03:11.174216 containerd[1510]: time="2025-09-09T00:03:11.174193369Z" level=info msg="TearDown network for sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\" successfully" Sep 9 00:03:11.174216 containerd[1510]: time="2025-09-09T00:03:11.174211383Z" level=info msg="StopPodSandbox for \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\" returns successfully" Sep 9 00:03:11.174520 containerd[1510]: time="2025-09-09T00:03:11.174493947Z" level=info msg="RemovePodSandbox for \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\"" Sep 9 00:03:11.174573 containerd[1510]: time="2025-09-09T00:03:11.174532810Z" level=info msg="Forcibly stopping sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\"" Sep 9 00:03:11.174656 containerd[1510]: time="2025-09-09T00:03:11.174612885Z" level=info msg="TearDown network for sandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\" successfully" Sep 9 00:03:11.176679 systemd[1]: cri-containerd-fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660.scope: Deactivated successfully. Sep 9 00:03:11.193023 containerd[1510]: time="2025-09-09T00:03:11.192743213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 00:03:11.193023 containerd[1510]: time="2025-09-09T00:03:11.192870628Z" level=info msg="RemovePodSandbox \"c6e37ff4243b79d1b2b73b8dc0bc261e4973e8a1ca6d759310c7461d28315897\" returns successfully" Sep 9 00:03:11.193808 containerd[1510]: time="2025-09-09T00:03:11.193628505Z" level=info msg="StopPodSandbox for \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\"" Sep 9 00:03:11.193808 containerd[1510]: time="2025-09-09T00:03:11.193743325Z" level=info msg="TearDown network for sandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\" successfully" Sep 9 00:03:11.193808 containerd[1510]: time="2025-09-09T00:03:11.193757453Z" level=info msg="StopPodSandbox for \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\" returns successfully" Sep 9 00:03:11.194302 containerd[1510]: time="2025-09-09T00:03:11.194220813Z" level=info msg="RemovePodSandbox for \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\"" Sep 9 00:03:11.194302 containerd[1510]: time="2025-09-09T00:03:11.194247745Z" level=info msg="Forcibly stopping sandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\"" Sep 9 00:03:11.194390 containerd[1510]: time="2025-09-09T00:03:11.194330935Z" level=info msg="TearDown network for sandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\" successfully" Sep 9 00:03:11.199178 containerd[1510]: time="2025-09-09T00:03:11.199128967Z" level=info msg="shim disconnected" id=fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660 namespace=k8s.io Sep 9 00:03:11.199178 containerd[1510]: time="2025-09-09T00:03:11.199176529Z" level=warning msg="cleaning up after shim disconnected" id=fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660 namespace=k8s.io Sep 9 00:03:11.199563 containerd[1510]: time="2025-09-09T00:03:11.199186798Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:03:11.200681 containerd[1510]: time="2025-09-09T00:03:11.200628379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:03:11.200915 containerd[1510]: time="2025-09-09T00:03:11.200698063Z" level=info msg="RemovePodSandbox \"5aae1a11cfbd3f9111d611402aac77bf6a6a6bbdbefaacebc23cbe1b8c85f156\" returns successfully" Sep 9 00:03:11.793331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc585c93d4795cf2632671e53662a20b073bb352cb7622ed8a514dd4c7962bdf-rootfs.mount: Deactivated successfully. Sep 9 00:03:11.793469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1923a36a608245cfeb22c83506796eeecce07b3cd5c79c0cd4726c9a900ca8e0-rootfs.mount: Deactivated successfully. Sep 9 00:03:11.793570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660-rootfs.mount: Deactivated successfully. Sep 9 00:03:11.793675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660-shm.mount: Deactivated successfully. 
Sep 9 00:03:11.989291 kubelet[2630]: I0909 00:03:11.989247 2630 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Sep 9 00:03:12.232660 systemd-networkd[1423]: calib0f4fc0d6e6: Link DOWN Sep 9 00:03:12.232670 systemd-networkd[1423]: calib0f4fc0d6e6: Lost carrier Sep 9 00:03:12.311888 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:51408.service - OpenSSH per-connection server daemon (10.0.0.1:51408). Sep 9 00:03:12.366358 sshd[6180]: Accepted publickey for core from 10.0.0.1 port 51408 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:12.367965 sshd-session[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:12.372323 systemd-logind[1493]: New session 12 of user core. Sep 9 00:03:12.384168 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.229 [INFO][6154] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.231 [INFO][6154] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" iface="eth0" netns="/var/run/netns/cni-18e8bd9b-cc67-5a42-06be-ffbca02d634a" Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.231 [INFO][6154] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" iface="eth0" netns="/var/run/netns/cni-18e8bd9b-cc67-5a42-06be-ffbca02d634a" Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.239 [INFO][6154] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" after=8.118044ms iface="eth0" netns="/var/run/netns/cni-18e8bd9b-cc67-5a42-06be-ffbca02d634a" Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.239 [INFO][6154] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.239 [INFO][6154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.264 [INFO][6167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" HandleID="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Workload="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.264 [INFO][6167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.264 [INFO][6167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.769 [INFO][6167] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" HandleID="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Workload="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.769 [INFO][6167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" HandleID="k8s-pod-network.fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Workload="localhost-k8s-whisker--b854f49bb--nlfqw-eth0" Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.771 [INFO][6167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:03:12.777585 containerd[1510]: 2025-09-09 00:03:12.774 [INFO][6154] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660" Sep 9 00:03:12.778832 containerd[1510]: time="2025-09-09T00:03:12.777824047Z" level=info msg="TearDown network for sandbox \"fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660\" successfully" Sep 9 00:03:12.778832 containerd[1510]: time="2025-09-09T00:03:12.777851860Z" level=info msg="StopPodSandbox for \"fa845f889f0005a16ac3ce75c2ba9f63e424c935c90fc2e827016c3210b4f660\" returns successfully" Sep 9 00:03:12.781446 systemd[1]: run-netns-cni\x2d18e8bd9b\x2dcc67\x2d5a42\x2d06be\x2dffbca02d634a.mount: Deactivated successfully. Sep 9 00:03:12.859054 kubelet[2630]: I0909 00:03:12.858995 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwlcs\" (UniqueName: \"kubernetes.io/projected/fe12ff7b-73e6-42d0-a348-29ad8070fac9-kube-api-access-kwlcs\") pod \"fe12ff7b-73e6-42d0-a348-29ad8070fac9\" (UID: \"fe12ff7b-73e6-42d0-a348-29ad8070fac9\") " Sep 9 00:03:12.866234 kubelet[2630]: I0909 00:03:12.859072 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe12ff7b-73e6-42d0-a348-29ad8070fac9-whisker-ca-bundle\") pod \"fe12ff7b-73e6-42d0-a348-29ad8070fac9\" (UID: \"fe12ff7b-73e6-42d0-a348-29ad8070fac9\") " Sep 9 00:03:12.866234 kubelet[2630]: I0909 00:03:12.859091 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe12ff7b-73e6-42d0-a348-29ad8070fac9-whisker-backend-key-pair\") pod \"fe12ff7b-73e6-42d0-a348-29ad8070fac9\" (UID: \"fe12ff7b-73e6-42d0-a348-29ad8070fac9\") " Sep 9 00:03:12.873832 systemd[1]: var-lib-kubelet-pods-fe12ff7b\x2d73e6\x2d42d0\x2da348\x2d29ad8070fac9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkwlcs.mount: Deactivated successfully. Sep 9 00:03:12.873977 systemd[1]: var-lib-kubelet-pods-fe12ff7b\x2d73e6\x2d42d0\x2da348\x2d29ad8070fac9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 00:03:12.881550 kubelet[2630]: I0909 00:03:12.881471 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe12ff7b-73e6-42d0-a348-29ad8070fac9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fe12ff7b-73e6-42d0-a348-29ad8070fac9" (UID: "fe12ff7b-73e6-42d0-a348-29ad8070fac9"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:03:12.881812 kubelet[2630]: I0909 00:03:12.881787 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe12ff7b-73e6-42d0-a348-29ad8070fac9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fe12ff7b-73e6-42d0-a348-29ad8070fac9" (UID: "fe12ff7b-73e6-42d0-a348-29ad8070fac9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:03:12.882274 kubelet[2630]: I0909 00:03:12.882188 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe12ff7b-73e6-42d0-a348-29ad8070fac9-kube-api-access-kwlcs" (OuterVolumeSpecName: "kube-api-access-kwlcs") pod "fe12ff7b-73e6-42d0-a348-29ad8070fac9" (UID: "fe12ff7b-73e6-42d0-a348-29ad8070fac9"). InnerVolumeSpecName "kube-api-access-kwlcs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:03:12.960084 kubelet[2630]: I0909 00:03:12.959997 2630 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kwlcs\" (UniqueName: \"kubernetes.io/projected/fe12ff7b-73e6-42d0-a348-29ad8070fac9-kube-api-access-kwlcs\") on node \"localhost\" DevicePath \"\"" Sep 9 00:03:12.960084 kubelet[2630]: I0909 00:03:12.960074 2630 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe12ff7b-73e6-42d0-a348-29ad8070fac9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:03:12.960084 kubelet[2630]: I0909 00:03:12.960088 2630 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fe12ff7b-73e6-42d0-a348-29ad8070fac9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:03:13.002675 systemd[1]: Removed slice kubepods-besteffort-podfe12ff7b_73e6_42d0_a348_29ad8070fac9.slice - libcontainer container kubepods-besteffort-podfe12ff7b_73e6_42d0_a348_29ad8070fac9.slice. Sep 9 00:03:13.003006 systemd[1]: kubepods-besteffort-podfe12ff7b_73e6_42d0_a348_29ad8070fac9.slice: Consumed 175ms CPU time, 19.8M memory peak, 2.3M read from disk, 12K written to disk. Sep 9 00:03:13.077024 sshd[6183]: Connection closed by 10.0.0.1 port 51408 Sep 9 00:03:13.077313 sshd-session[6180]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:13.090170 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:51408.service: Deactivated successfully. Sep 9 00:03:13.092260 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:03:13.093902 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:03:13.100296 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:51424.service - OpenSSH per-connection server daemon (10.0.0.1:51424). Sep 9 00:03:13.101508 systemd-logind[1493]: Removed session 12. Sep 9 00:03:13.192094 sshd[6205]: Accepted publickey for core from 10.0.0.1 port 51424 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:13.193528 sshd-session[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:13.197728 systemd-logind[1493]: New session 13 of user core. Sep 9 00:03:13.207147 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 9 00:03:13.566482 sshd[6210]: Connection closed by 10.0.0.1 port 51424 Sep 9 00:03:13.567241 sshd-session[6205]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:13.579261 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:51424.service: Deactivated successfully. Sep 9 00:03:13.583702 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:03:13.588243 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:03:13.595413 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:51426.service - OpenSSH per-connection server daemon (10.0.0.1:51426). Sep 9 00:03:13.596872 systemd-logind[1493]: Removed session 13. Sep 9 00:03:13.638733 sshd[6220]: Accepted publickey for core from 10.0.0.1 port 51426 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:13.640653 sshd-session[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:13.645220 systemd-logind[1493]: New session 14 of user core. Sep 9 00:03:13.656245 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:03:13.883542 sshd[6223]: Connection closed by 10.0.0.1 port 51426 Sep 9 00:03:13.884176 sshd-session[6220]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:13.888624 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:51426.service: Deactivated successfully. Sep 9 00:03:13.892891 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:03:13.894163 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:03:13.895446 systemd-logind[1493]: Removed session 14. Sep 9 00:03:14.031479 containerd[1510]: time="2025-09-09T00:03:14.031422687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:14.032250 containerd[1510]: time="2025-09-09T00:03:14.032199749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 00:03:14.033330 containerd[1510]: time="2025-09-09T00:03:14.033295611Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:14.036737 containerd[1510]: time="2025-09-09T00:03:14.036680475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:03:14.037499 containerd[1510]: time="2025-09-09T00:03:14.037437146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.571536271s" Sep 9 00:03:14.037499 containerd[1510]: time="2025-09-09T00:03:14.037474619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 00:03:14.039695 containerd[1510]: time="2025-09-09T00:03:14.039670732Z" level=info msg="CreateContainer within sandbox \"d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:03:14.055124 containerd[1510]: time="2025-09-09T00:03:14.054918645Z" level=info msg="CreateContainer within sandbox \"d351c433b6970e2dfe7b2e3fa454eadd0eb06d942301b56a70ba247f46dc832c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bee27914f5384352523b29b910cc05a4e1f4d69266b148b4ac7529e8f9acaecf\"" Sep 9 00:03:14.055822 containerd[1510]: time="2025-09-09T00:03:14.055782322Z" level=info msg="StartContainer for \"bee27914f5384352523b29b910cc05a4e1f4d69266b148b4ac7529e8f9acaecf\"" Sep 9 00:03:14.091211 systemd[1]: Started cri-containerd-bee27914f5384352523b29b910cc05a4e1f4d69266b148b4ac7529e8f9acaecf.scope - libcontainer container bee27914f5384352523b29b910cc05a4e1f4d69266b148b4ac7529e8f9acaecf. Sep 9 00:03:14.125308 containerd[1510]: time="2025-09-09T00:03:14.125266367Z" level=info msg="StartContainer for \"bee27914f5384352523b29b910cc05a4e1f4d69266b148b4ac7529e8f9acaecf\" returns successfully" Sep 9 00:03:14.592920 kubelet[2630]: I0909 00:03:14.592864 2630 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:03:14.592920 kubelet[2630]: I0909 00:03:14.592910 2630 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:03:15.429316 kubelet[2630]: I0909 00:03:15.429278 2630 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe12ff7b-73e6-42d0-a348-29ad8070fac9" path="/var/lib/kubelet/pods/fe12ff7b-73e6-42d0-a348-29ad8070fac9/volumes" Sep 9 00:03:18.327577 kubelet[2630]: I0909 00:03:18.327510 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:03:18.345665 kubelet[2630]: I0909 00:03:18.345363 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nbs8t" podStartSLOduration=29.48207395 podStartE2EDuration="52.345343835s" podCreationTimestamp="2025-09-09 00:02:26 +0000 UTC" firstStartedPulling="2025-09-09 00:02:51.174968639 +0000 UTC m=+41.842539450" lastFinishedPulling="2025-09-09 00:03:14.038238514 +0000 UTC m=+64.705809335" observedRunningTime="2025-09-09 00:03:15.111342974 +0000 UTC m=+65.778913805" watchObservedRunningTime="2025-09-09 00:03:18.345343835 +0000 UTC m=+69.012914666" Sep 9 00:03:18.901086 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:51432.service - OpenSSH per-connection server daemon (10.0.0.1:51432). Sep 9 00:03:18.959104 sshd[6290]: Accepted publickey for core from 10.0.0.1 port 51432 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:18.960583 sshd-session[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:18.964607 systemd-logind[1493]: New session 15 of user core. Sep 9 00:03:18.973173 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:03:19.173708 sshd[6292]: Connection closed by 10.0.0.1 port 51432 Sep 9 00:03:19.174603 sshd-session[6290]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:19.179410 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:51432.service: Deactivated successfully. Sep 9 00:03:19.181610 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:03:19.182407 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:03:19.183375 systemd-logind[1493]: Removed session 15. 
Sep 9 00:03:19.892326 systemd[1]: run-containerd-runc-k8s.io-8bab6dc0b213c0786d498a7e3bb66dc0c45b64b8b28e4d9ab646ce062f1a6bf7-runc.B3Ca2S.mount: Deactivated successfully. Sep 9 00:03:24.191227 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:38364.service - OpenSSH per-connection server daemon (10.0.0.1:38364). Sep 9 00:03:24.390307 sshd[6329]: Accepted publickey for core from 10.0.0.1 port 38364 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:24.392330 sshd-session[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:24.397531 systemd-logind[1493]: New session 16 of user core. Sep 9 00:03:24.403191 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:03:24.555650 sshd[6331]: Connection closed by 10.0.0.1 port 38364 Sep 9 00:03:24.557702 sshd-session[6329]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:24.561312 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:38364.service: Deactivated successfully. Sep 9 00:03:24.563950 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:03:24.566483 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:03:24.567620 systemd-logind[1493]: Removed session 16. Sep 9 00:03:25.422479 kubelet[2630]: E0909 00:03:25.422433 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:03:27.421952 kubelet[2630]: E0909 00:03:27.421911 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:03:29.573626 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:38370.service - OpenSSH per-connection server daemon (10.0.0.1:38370). Sep 9 00:03:29.639874 sshd[6362]: Accepted publickey for core from 10.0.0.1 port 38370 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:29.642201 sshd-session[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:29.647808 systemd-logind[1493]: New session 17 of user core. Sep 9 00:03:29.659325 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:03:29.956939 sshd[6365]: Connection closed by 10.0.0.1 port 38370 Sep 9 00:03:29.957389 sshd-session[6362]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:29.966111 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:38370.service: Deactivated successfully. Sep 9 00:03:29.968049 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:03:29.969655 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:03:29.977387 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:44794.service - OpenSSH per-connection server daemon (10.0.0.1:44794). Sep 9 00:03:29.978443 systemd-logind[1493]: Removed session 17. Sep 9 00:03:30.014292 sshd[6378]: Accepted publickey for core from 10.0.0.1 port 44794 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:30.016121 sshd-session[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:30.021993 systemd-logind[1493]: New session 18 of user core. Sep 9 00:03:30.028244 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 9 00:03:30.574435 sshd[6381]: Connection closed by 10.0.0.1 port 44794 Sep 9 00:03:30.575113 sshd-session[6378]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:30.588618 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:44794.service: Deactivated successfully. Sep 9 00:03:30.591324 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:03:30.593168 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:03:30.604872 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:44796.service - OpenSSH per-connection server daemon (10.0.0.1:44796). Sep 9 00:03:30.606253 systemd-logind[1493]: Removed session 18. Sep 9 00:03:30.641600 sshd[6391]: Accepted publickey for core from 10.0.0.1 port 44796 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:30.644088 sshd-session[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:30.649582 systemd-logind[1493]: New session 19 of user core. Sep 9 00:03:30.655233 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:03:31.428612 sshd[6394]: Connection closed by 10.0.0.1 port 44796 Sep 9 00:03:31.429382 sshd-session[6391]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:31.444745 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:44796.service: Deactivated successfully. Sep 9 00:03:31.447727 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:03:31.448992 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:03:31.460555 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:44800.service - OpenSSH per-connection server daemon (10.0.0.1:44800). Sep 9 00:03:31.462609 systemd-logind[1493]: Removed session 19. Sep 9 00:03:31.517095 sshd[6412]: Accepted publickey for core from 10.0.0.1 port 44800 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:31.518868 sshd-session[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:31.523602 systemd-logind[1493]: New session 20 of user core. Sep 9 00:03:31.533203 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:03:31.893679 sshd[6416]: Connection closed by 10.0.0.1 port 44800 Sep 9 00:03:31.894704 sshd-session[6412]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:31.908114 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:44800.service: Deactivated successfully. Sep 9 00:03:31.910892 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:03:31.914945 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:03:31.923699 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:44816.service - OpenSSH per-connection server daemon (10.0.0.1:44816). Sep 9 00:03:31.926565 systemd-logind[1493]: Removed session 20. Sep 9 00:03:31.991170 sshd[6426]: Accepted publickey for core from 10.0.0.1 port 44816 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:31.993497 sshd-session[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:32.006673 systemd-logind[1493]: New session 21 of user core. Sep 9 00:03:32.016823 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:03:32.187881 sshd[6429]: Connection closed by 10.0.0.1 port 44816 Sep 9 00:03:32.188372 sshd-session[6426]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:32.193827 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:44816.service: Deactivated successfully. 
Sep 9 00:03:32.197201 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:03:32.198495 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:03:32.199981 systemd-logind[1493]: Removed session 21. Sep 9 00:03:36.422859 kubelet[2630]: E0909 00:03:36.422802 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:03:37.201841 systemd[1]: Started sshd@21-10.0.0.133:22-10.0.0.1:44826.service - OpenSSH per-connection server daemon (10.0.0.1:44826). Sep 9 00:03:37.253231 sshd[6493]: Accepted publickey for core from 10.0.0.1 port 44826 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:37.254742 sshd-session[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:37.259212 systemd-logind[1493]: New session 22 of user core. Sep 9 00:03:37.271196 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:03:37.405437 sshd[6495]: Connection closed by 10.0.0.1 port 44826 Sep 9 00:03:37.405812 sshd-session[6493]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:37.409966 systemd[1]: sshd@21-10.0.0.133:22-10.0.0.1:44826.service: Deactivated successfully. Sep 9 00:03:37.412321 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:03:37.413209 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:03:37.414152 systemd-logind[1493]: Removed session 22. Sep 9 00:03:37.422477 kubelet[2630]: E0909 00:03:37.422393 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:03:42.418641 systemd[1]: Started sshd@22-10.0.0.133:22-10.0.0.1:50046.service - OpenSSH per-connection server daemon (10.0.0.1:50046). Sep 9 00:03:42.468519 sshd[6535]: Accepted publickey for core from 10.0.0.1 port 50046 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:42.470328 sshd-session[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:42.475130 systemd-logind[1493]: New session 23 of user core. Sep 9 00:03:42.484353 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 00:03:42.663281 sshd[6537]: Connection closed by 10.0.0.1 port 50046 Sep 9 00:03:42.663675 sshd-session[6535]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:42.668148 systemd[1]: sshd@22-10.0.0.133:22-10.0.0.1:50046.service: Deactivated successfully. Sep 9 00:03:42.670607 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:03:42.671537 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:03:42.672617 systemd-logind[1493]: Removed session 23. Sep 9 00:03:47.677829 systemd[1]: Started sshd@23-10.0.0.133:22-10.0.0.1:50058.service - OpenSSH per-connection server daemon (10.0.0.1:50058). Sep 9 00:03:47.721722 sshd[6552]: Accepted publickey for core from 10.0.0.1 port 50058 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:47.723603 sshd-session[6552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:47.727926 systemd-logind[1493]: New session 24 of user core. Sep 9 00:03:47.735190 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 9 00:03:47.860992 sshd[6554]: Connection closed by 10.0.0.1 port 50058 Sep 9 00:03:47.861470 sshd-session[6552]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:47.866675 systemd[1]: sshd@23-10.0.0.133:22-10.0.0.1:50058.service: Deactivated successfully. Sep 9 00:03:47.869154 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:03:47.870120 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:03:47.870990 systemd-logind[1493]: Removed session 24. Sep 9 00:03:52.874238 systemd[1]: Started sshd@24-10.0.0.133:22-10.0.0.1:58666.service - OpenSSH per-connection server daemon (10.0.0.1:58666). Sep 9 00:03:52.920321 sshd[6592]: Accepted publickey for core from 10.0.0.1 port 58666 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 9 00:03:52.921911 sshd-session[6592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:03:52.926267 systemd-logind[1493]: New session 25 of user core. Sep 9 00:03:52.938155 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 00:03:53.069563 sshd[6594]: Connection closed by 10.0.0.1 port 58666 Sep 9 00:03:53.069935 sshd-session[6592]: pam_unix(sshd:session): session closed for user core Sep 9 00:03:53.074496 systemd[1]: sshd@24-10.0.0.133:22-10.0.0.1:58666.service: Deactivated successfully. Sep 9 00:03:53.076777 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:03:53.077528 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:03:53.078457 systemd-logind[1493]: Removed session 25. Sep 9 00:03:54.421991 kubelet[2630]: E0909 00:03:54.421939 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"