Sep 4 23:46:21.115583 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:03:18 -00 2025
Sep 4 23:46:21.115614 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:46:21.115624 kernel: BIOS-provided physical RAM map:
Sep 4 23:46:21.115631 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 4 23:46:21.115640 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 4 23:46:21.115647 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 4 23:46:21.115655 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 4 23:46:21.115661 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 4 23:46:21.115668 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 4 23:46:21.115675 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 4 23:46:21.115682 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 4 23:46:21.115688 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 4 23:46:21.115695 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 4 23:46:21.115704 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 4 23:46:21.115715 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 4 23:46:21.115722 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 4 23:46:21.115729 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 23:46:21.115736 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 23:46:21.115746 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 23:46:21.115753 kernel: NX (Execute Disable) protection: active
Sep 4 23:46:21.115760 kernel: APIC: Static calls initialized
Sep 4 23:46:21.115767 kernel: e820: update [mem 0x9a185018-0x9a18ec57] usable ==> usable
Sep 4 23:46:21.115775 kernel: e820: update [mem 0x9a185018-0x9a18ec57] usable ==> usable
Sep 4 23:46:21.115782 kernel: e820: update [mem 0x9a148018-0x9a184e57] usable ==> usable
Sep 4 23:46:21.115789 kernel: e820: update [mem 0x9a148018-0x9a184e57] usable ==> usable
Sep 4 23:46:21.115795 kernel: extended physical RAM map:
Sep 4 23:46:21.115803 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 4 23:46:21.115810 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 4 23:46:21.115817 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 4 23:46:21.115824 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 4 23:46:21.115834 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a148017] usable
Sep 4 23:46:21.115841 kernel: reserve setup_data: [mem 0x000000009a148018-0x000000009a184e57] usable
Sep 4 23:46:21.115848 kernel: reserve setup_data: [mem 0x000000009a184e58-0x000000009a185017] usable
Sep 4 23:46:21.115855 kernel: reserve setup_data: [mem 0x000000009a185018-0x000000009a18ec57] usable
Sep 4 23:46:21.115862 kernel: reserve setup_data: [mem 0x000000009a18ec58-0x000000009b8ecfff] usable
Sep 4 23:46:21.115869 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 4 23:46:21.115876 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 4 23:46:21.115883 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 4 23:46:21.115890 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 4 23:46:21.115898 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 4 23:46:21.115911 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 4 23:46:21.115918 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 4 23:46:21.115925 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 4 23:46:21.115933 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 23:46:21.115940 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 23:46:21.115947 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 23:46:21.115957 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:46:21.115965 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1f7018 RNG=0x9bb73018
Sep 4 23:46:21.115972 kernel: random: crng init done
Sep 4 23:46:21.115979 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 4 23:46:21.115987 kernel: secureboot: Secure boot enabled
Sep 4 23:46:21.115994 kernel: SMBIOS 2.8 present.
Sep 4 23:46:21.116001 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 4 23:46:21.116009 kernel: Hypervisor detected: KVM
Sep 4 23:46:21.116016 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 23:46:21.116024 kernel: kvm-clock: using sched offset of 7346694761 cycles
Sep 4 23:46:21.116031 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 23:46:21.116041 kernel: tsc: Detected 2794.750 MHz processor
Sep 4 23:46:21.116049 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 23:46:21.116057 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 23:46:21.116064 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 4 23:46:21.116072 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 23:46:21.116080 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 23:46:21.116087 kernel: Using GB pages for direct mapping
Sep 4 23:46:21.116104 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:46:21.116112 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 4 23:46:21.116122 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 4 23:46:21.116130 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:46:21.116140 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:46:21.116147 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 4 23:46:21.116155 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:46:21.116163 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:46:21.116170 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:46:21.116178 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 23:46:21.116185 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 4 23:46:21.116195 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 4 23:46:21.116203 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 4 23:46:21.116210 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 4 23:46:21.116218 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 4 23:46:21.116225 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 4 23:46:21.116233 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 4 23:46:21.116241 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 4 23:46:21.116248 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 4 23:46:21.116255 kernel: No NUMA configuration found
Sep 4 23:46:21.116265 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 4 23:46:21.116273 kernel: NODE_DATA(0) allocated [mem 0x9bf59000-0x9bf5efff]
Sep 4 23:46:21.116281 kernel: Zone ranges:
Sep 4 23:46:21.116288 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 23:46:21.116296 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 4 23:46:21.116303 kernel: Normal empty
Sep 4 23:46:21.116328 kernel: Movable zone start for each node
Sep 4 23:46:21.116336 kernel: Early memory node ranges
Sep 4 23:46:21.116343 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 4 23:46:21.116354 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 4 23:46:21.116361 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 4 23:46:21.116369 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 4 23:46:21.116376 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 4 23:46:21.116384 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 4 23:46:21.116391 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 23:46:21.116399 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 4 23:46:21.116407 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 23:46:21.116414 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 4 23:46:21.116424 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 4 23:46:21.116432 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 4 23:46:21.116439 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 23:46:21.116447 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 23:46:21.116454 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 23:46:21.116462 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 23:46:21.116469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 23:46:21.116477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 23:46:21.116484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 23:46:21.116494 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 23:46:21.116502 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 23:46:21.116509 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 23:46:21.116517 kernel: TSC deadline timer available
Sep 4 23:46:21.116524 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 4 23:46:21.116532 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 23:46:21.116540 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 23:46:21.116557 kernel: kvm-guest: setup PV sched yield
Sep 4 23:46:21.116564 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 4 23:46:21.116572 kernel: Booting paravirtualized kernel on KVM
Sep 4 23:46:21.116580 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 23:46:21.116588 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 23:46:21.116598 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 4 23:46:21.116606 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 4 23:46:21.116613 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 23:46:21.116621 kernel: kvm-guest: PV spinlocks enabled
Sep 4 23:46:21.116631 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 23:46:21.116641 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:46:21.116649 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:46:21.116657 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:46:21.116668 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:46:21.116675 kernel: Fallback order for Node 0: 0
Sep 4 23:46:21.116683 kernel: Built 1 zonelists, mobility grouping on. Total pages: 625927
Sep 4 23:46:21.116691 kernel: Policy zone: DMA32
Sep 4 23:46:21.116699 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:46:21.116709 kernel: Memory: 2370352K/2552216K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 181608K reserved, 0K cma-reserved)
Sep 4 23:46:21.116717 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 23:46:21.116725 kernel: ftrace: allocating 37943 entries in 149 pages
Sep 4 23:46:21.116733 kernel: ftrace: allocated 149 pages with 4 groups
Sep 4 23:46:21.116741 kernel: Dynamic Preempt: voluntary
Sep 4 23:46:21.116749 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:46:21.116758 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:46:21.116766 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 23:46:21.116774 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:46:21.116784 kernel: Rude variant of Tasks RCU enabled.
Sep 4 23:46:21.116792 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:46:21.116800 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:46:21.116808 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 23:46:21.116816 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 23:46:21.116823 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:46:21.116831 kernel: Console: colour dummy device 80x25
Sep 4 23:46:21.116839 kernel: printk: console [ttyS0] enabled
Sep 4 23:46:21.116847 kernel: ACPI: Core revision 20230628
Sep 4 23:46:21.116857 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 23:46:21.116865 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 23:46:21.116873 kernel: x2apic enabled
Sep 4 23:46:21.116881 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 23:46:21.116889 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 23:46:21.116897 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 23:46:21.116905 kernel: kvm-guest: setup PV IPIs
Sep 4 23:46:21.116913 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 23:46:21.116920 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 23:46:21.116931 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 4 23:46:21.116939 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 23:46:21.116947 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 23:46:21.116954 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 23:46:21.116962 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 23:46:21.116970 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 23:46:21.116978 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 23:46:21.116986 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 23:46:21.116994 kernel: active return thunk: retbleed_return_thunk
Sep 4 23:46:21.117004 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 23:46:21.117012 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 23:46:21.117020 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 23:46:21.117028 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 23:46:21.117036 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 23:46:21.117044 kernel: active return thunk: srso_return_thunk
Sep 4 23:46:21.117052 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 23:46:21.117063 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 23:46:21.117073 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 23:46:21.117081 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 23:46:21.117089 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 23:46:21.117104 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 23:46:21.117113 kernel: Freeing SMP alternatives memory: 32K
Sep 4 23:46:21.117120 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:46:21.117128 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:46:21.117136 kernel: landlock: Up and running.
Sep 4 23:46:21.117144 kernel: SELinux: Initializing.
Sep 4 23:46:21.117154 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:46:21.117162 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:46:21.117171 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 23:46:21.117182 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:46:21.117192 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:46:21.117203 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 23:46:21.117214 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 23:46:21.117225 kernel: ... version: 0
Sep 4 23:46:21.117235 kernel: ... bit width: 48
Sep 4 23:46:21.117247 kernel: ... generic registers: 6
Sep 4 23:46:21.117255 kernel: ... value mask: 0000ffffffffffff
Sep 4 23:46:21.117265 kernel: ... max period: 00007fffffffffff
Sep 4 23:46:21.117275 kernel: ... fixed-purpose events: 0
Sep 4 23:46:21.117285 kernel: ... event mask: 000000000000003f
Sep 4 23:46:21.117296 kernel: signal: max sigframe size: 1776
Sep 4 23:46:21.117306 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:46:21.117328 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:46:21.117336 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:46:21.117347 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 23:46:21.117355 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 23:46:21.117363 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 23:46:21.117371 kernel: smpboot: Max logical packages: 1
Sep 4 23:46:21.117379 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 4 23:46:21.117386 kernel: devtmpfs: initialized
Sep 4 23:46:21.117394 kernel: x86/mm: Memory block size: 128MB
Sep 4 23:46:21.117402 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 4 23:46:21.117410 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 4 23:46:21.117418 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:46:21.117428 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 23:46:21.117436 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:46:21.117444 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:46:21.117452 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:46:21.117461 kernel: audit: type=2000 audit(1757029579.263:1): state=initialized audit_enabled=0 res=1
Sep 4 23:46:21.117469 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:46:21.117476 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 23:46:21.117484 kernel: cpuidle: using governor menu
Sep 4 23:46:21.117494 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:46:21.117502 kernel: dca service started, version 1.12.1
Sep 4 23:46:21.117510 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 4 23:46:21.117518 kernel: PCI: Using configuration type 1 for base access
Sep 4 23:46:21.117526 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 23:46:21.117534 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:46:21.117542 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:46:21.117550 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:46:21.117557 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:46:21.117568 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:46:21.117575 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:46:21.117583 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:46:21.117591 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:46:21.117599 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 23:46:21.117607 kernel: ACPI: Interpreter enabled
Sep 4 23:46:21.117614 kernel: ACPI: PM: (supports S0 S5)
Sep 4 23:46:21.117622 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 23:46:21.117630 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 23:46:21.117641 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 23:46:21.117649 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 23:46:21.117657 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 23:46:21.117878 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:46:21.118021 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 23:46:21.118158 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 23:46:21.118170 kernel: PCI host bridge to bus 0000:00
Sep 4 23:46:21.118326 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 23:46:21.118455 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 23:46:21.118572 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 23:46:21.118688 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 4 23:46:21.118804 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 4 23:46:21.118919 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 4 23:46:21.119034 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 23:46:21.119210 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 4 23:46:21.119397 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 4 23:46:21.119528 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 4 23:46:21.119654 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 4 23:46:21.119782 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 4 23:46:21.119909 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 4 23:46:21.120037 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 23:46:21.120227 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 23:46:21.120382 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 4 23:46:21.120514 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 4 23:46:21.120641 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 4 23:46:21.120793 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 4 23:46:21.120924 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 4 23:46:21.121060 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 4 23:46:21.121200 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 4 23:46:21.121370 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 4 23:46:21.121501 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 4 23:46:21.121629 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 4 23:46:21.121760 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 4 23:46:21.121890 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 4 23:46:21.122039 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 4 23:46:21.122179 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 23:46:21.122341 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 4 23:46:21.122475 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 4 23:46:21.122602 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 4 23:46:21.122753 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 4 23:46:21.122885 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 4 23:46:21.122901 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 23:46:21.122909 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 23:46:21.122917 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 23:46:21.122925 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 23:46:21.122933 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 23:46:21.122941 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 23:46:21.122949 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 23:46:21.122956 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 23:46:21.122967 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 23:46:21.122975 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 23:46:21.122983 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 23:46:21.122990 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 23:46:21.122998 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 23:46:21.123006 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 23:46:21.123014 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 23:46:21.123022 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 23:46:21.123030 kernel: iommu: Default domain type: Translated
Sep 4 23:46:21.123038 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 23:46:21.123049 kernel: efivars: Registered efivars operations
Sep 4 23:46:21.123058 kernel: PCI: Using ACPI for IRQ routing
Sep 4 23:46:21.123066 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 23:46:21.123073 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 4 23:46:21.123081 kernel: e820: reserve RAM buffer [mem 0x9a148018-0x9bffffff]
Sep 4 23:46:21.123089 kernel: e820: reserve RAM buffer [mem 0x9a185018-0x9bffffff]
Sep 4 23:46:21.123106 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 4 23:46:21.123113 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 4 23:46:21.123243 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 23:46:21.123389 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 23:46:21.123524 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 23:46:21.123535 kernel: vgaarb: loaded
Sep 4 23:46:21.123544 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 23:46:21.123552 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 23:46:21.123560 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 23:46:21.123567 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:46:21.123575 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:46:21.123587 kernel: pnp: PnP ACPI init
Sep 4 23:46:21.123752 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 4 23:46:21.123765 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 23:46:21.123773 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 23:46:21.123781 kernel: NET: Registered PF_INET protocol family
Sep 4 23:46:21.123789 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:46:21.123797 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:46:21.123805 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:46:21.123816 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:46:21.123824 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:46:21.123832 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:46:21.123840 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:46:21.123848 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:46:21.123856 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:46:21.123864 kernel: NET: Registered PF_XDP protocol family
Sep 4 23:46:21.123994 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 4 23:46:21.124136 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 4 23:46:21.124277 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 23:46:21.124412 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 23:46:21.124529 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 23:46:21.124644 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 4 23:46:21.124785 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 4 23:46:21.124958 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 4 23:46:21.124971 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:46:21.124979 kernel: Initialise system trusted keyrings
Sep 4 23:46:21.124992 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:46:21.125000 kernel: Key type asymmetric registered
Sep 4 23:46:21.125007 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:46:21.125015 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 23:46:21.125023 kernel: io scheduler mq-deadline registered
Sep 4 23:46:21.125031 kernel: io scheduler kyber registered
Sep 4 23:46:21.125039 kernel: io scheduler bfq registered
Sep 4 23:46:21.125047 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 23:46:21.125073 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 23:46:21.125085 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 23:46:21.125106 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 23:46:21.125114 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:46:21.125122 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 23:46:21.125131 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 23:46:21.125139 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 23:46:21.125147 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 23:46:21.125156 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 23:46:21.125334 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 23:46:21.125465 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 23:46:21.125585 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T23:46:20 UTC (1757029580)
Sep 4 23:46:21.125705 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 4 23:46:21.125716 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 23:46:21.125725 kernel: efifb: probing for efifb
Sep 4 23:46:21.125733 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 4 23:46:21.125741 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 4 23:46:21.125753 kernel: efifb: scrolling: redraw
Sep 4 23:46:21.125761 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 23:46:21.125769 kernel: Console: switching to colour frame buffer device 160x50
Sep 4 23:46:21.125777 kernel: fb0: EFI VGA frame buffer device
Sep 4 23:46:21.125786 kernel: pstore: Using crash dump compression: deflate
Sep 4 23:46:21.125794 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 23:46:21.125802 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:46:21.125810 kernel: Segment Routing with IPv6
Sep 4 23:46:21.125819 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:46:21.125827 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:46:21.125837 kernel: Key type dns_resolver registered
Sep 4 23:46:21.125848 kernel: IPI shorthand broadcast: enabled
Sep 4 23:46:21.125856 kernel: sched_clock: Marking stable (1533002464, 709499431)->(2398122038, -155620143)
Sep 4 23:46:21.125864 kernel: registered taskstats version 1
Sep 4 23:46:21.125872 kernel: Loading compiled-in X.509 certificates
Sep 4 23:46:21.125883 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: f395d469db1520f53594f6c4948c5f8002e6cc8b'
Sep 4 23:46:21.125891 kernel: Key type .fscrypt registered
Sep 4 23:46:21.125900 kernel: Key type fscrypt-provisioning registered
Sep 4 23:46:21.125908 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:46:21.125916 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:46:21.125924 kernel: ima: No architecture policies found
Sep 4 23:46:21.125933 kernel: clk: Disabling unused clocks
Sep 4 23:46:21.125941 kernel: Freeing unused kernel image (initmem) memory: 43508K
Sep 4 23:46:21.125949 kernel: Write protecting the kernel read-only data: 38912k
Sep 4 23:46:21.125960 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 4 23:46:21.125968 kernel: Run /init as init process
Sep 4 23:46:21.125976 kernel: with arguments:
Sep 4 23:46:21.125984 kernel: /init
Sep 4 23:46:21.125992 kernel: with environment:
Sep 4 23:46:21.126000 kernel: HOME=/
Sep 4 23:46:21.126008 kernel: TERM=linux
Sep 4 23:46:21.126016 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:46:21.126029 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:46:21.126043 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:46:21.126052 systemd[1]: Detected virtualization kvm. Sep 4 23:46:21.126061 systemd[1]: Detected architecture x86-64. Sep 4 23:46:21.126070 systemd[1]: Running in initrd. Sep 4 23:46:21.126078 systemd[1]: No hostname configured, using default hostname. Sep 4 23:46:21.126087 systemd[1]: Hostname set to . Sep 4 23:46:21.126111 systemd[1]: Initializing machine ID from VM UUID. Sep 4 23:46:21.126127 systemd[1]: Queued start job for default target initrd.target. Sep 4 23:46:21.126139 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:46:21.126151 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:46:21.126163 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 23:46:21.126175 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 23:46:21.126186 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 23:46:21.126199 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 23:46:21.126214 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 23:46:21.126223 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 23:46:21.126231 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 4 23:46:21.126240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:46:21.126249 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:46:21.126258 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:46:21.126267 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:46:21.126278 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:46:21.126289 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 23:46:21.126298 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 23:46:21.126309 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 23:46:21.126330 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 23:46:21.126339 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:46:21.126348 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 23:46:21.126356 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:46:21.126366 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:46:21.126377 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 23:46:21.126393 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:46:21.126404 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 23:46:21.126422 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 23:46:21.126444 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 23:46:21.126460 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 23:46:21.126477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:46:21.126493 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Sep 4 23:46:21.126509 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:46:21.126532 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 23:46:21.126553 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 23:46:21.126615 systemd-journald[194]: Collecting audit messages is disabled. Sep 4 23:46:21.126642 systemd-journald[194]: Journal started Sep 4 23:46:21.126663 systemd-journald[194]: Runtime Journal (/run/log/journal/8abe6f9e457a459db5cf1c55c5b68367) is 6M, max 48M, 42M free. Sep 4 23:46:21.126703 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:46:21.127678 systemd-modules-load[195]: Inserted module 'overlay' Sep 4 23:46:21.130919 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 23:46:21.133574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 23:46:21.146540 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:46:21.149441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:46:21.152656 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:46:21.161056 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:46:21.167340 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 23:46:21.169751 systemd-modules-load[195]: Inserted module 'br_netfilter' Sep 4 23:46:21.170721 kernel: Bridge firewalling registered Sep 4 23:46:21.180515 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:46:21.181987 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 4 23:46:21.193457 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 23:46:21.195193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:46:21.200116 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:46:21.208016 dracut-cmdline[225]: dracut-dracut-053 Sep 4 23:46:21.211472 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e Sep 4 23:46:21.211777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:46:21.225541 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:46:21.264946 systemd-resolved[249]: Positive Trust Anchors: Sep 4 23:46:21.264981 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:46:21.265023 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:46:21.268716 systemd-resolved[249]: Defaulting to hostname 'linux'. Sep 4 23:46:21.270051 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Sep 4 23:46:21.275599 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:46:21.372372 kernel: SCSI subsystem initialized Sep 4 23:46:21.382356 kernel: Loading iSCSI transport class v2.0-870. Sep 4 23:46:21.440354 kernel: iscsi: registered transport (tcp) Sep 4 23:46:21.499350 kernel: iscsi: registered transport (qla4xxx) Sep 4 23:46:21.499420 kernel: QLogic iSCSI HBA Driver Sep 4 23:46:21.555301 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 23:46:21.594463 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 23:46:21.618826 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 23:46:21.618859 kernel: device-mapper: uevent: version 1.0.3 Sep 4 23:46:21.619881 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 23:46:21.676350 kernel: raid6: avx2x4 gen() 22741 MB/s Sep 4 23:46:21.693348 kernel: raid6: avx2x2 gen() 22261 MB/s Sep 4 23:46:21.748345 kernel: raid6: avx2x1 gen() 17636 MB/s Sep 4 23:46:21.748379 kernel: raid6: using algorithm avx2x4 gen() 22741 MB/s Sep 4 23:46:21.765509 kernel: raid6: .... xor() 5345 MB/s, rmw enabled Sep 4 23:46:21.765535 kernel: raid6: using avx2x2 recovery algorithm Sep 4 23:46:21.807350 kernel: xor: automatically using best checksumming function avx Sep 4 23:46:22.009353 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 23:46:22.024545 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:46:22.036659 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:46:22.055162 systemd-udevd[414]: Using default interface naming scheme 'v255'. Sep 4 23:46:22.060908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:46:22.079511 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Sep 4 23:46:22.094278 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Sep 4 23:46:22.131097 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 23:46:22.151566 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:46:22.224579 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:46:22.230517 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 23:46:22.252291 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 23:46:22.253267 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:46:22.263759 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:46:22.264076 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:46:22.271459 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 23:46:22.282694 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:46:22.285606 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 23:46:22.305533 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 23:46:22.305599 kernel: AES CTR mode by8 optimization enabled Sep 4 23:46:22.310966 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 23:46:22.311307 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 23:46:22.316830 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 23:46:22.316863 kernel: GPT:9289727 != 19775487 Sep 4 23:46:22.316877 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 23:46:22.316889 kernel: GPT:9289727 != 19775487 Sep 4 23:46:22.316899 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 23:46:22.316909 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:46:22.321338 kernel: libata version 3.00 loaded. 
Sep 4 23:46:22.330558 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:46:22.343073 kernel: ahci 0000:00:1f.2: version 3.0 Sep 4 23:46:22.343300 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 4 23:46:22.330874 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:46:22.376073 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 4 23:46:22.376389 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 4 23:46:22.376545 kernel: BTRFS: device fsid 185ffa67-4184-4488-b7c8-7c0711a63b2d devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (475) Sep 4 23:46:22.376557 kernel: scsi host0: ahci Sep 4 23:46:22.378832 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:46:22.382820 kernel: scsi host1: ahci Sep 4 23:46:22.380092 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:46:22.380388 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:46:22.384278 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:46:22.390080 kernel: scsi host2: ahci Sep 4 23:46:22.390389 kernel: scsi host3: ahci Sep 4 23:46:22.390539 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (459) Sep 4 23:46:22.391465 kernel: scsi host4: ahci Sep 4 23:46:22.396681 kernel: scsi host5: ahci Sep 4 23:46:22.396918 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 4 23:46:22.396935 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 4 23:46:22.397691 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 4 23:46:22.398939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 4 23:46:22.404083 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 4 23:46:22.404107 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 4 23:46:22.404120 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 4 23:46:22.425839 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 23:46:22.436821 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 23:46:22.476674 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 23:46:22.486809 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 23:46:22.486914 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 23:46:22.620985 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 23:46:22.622891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:46:22.623030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:46:22.626156 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:46:22.627826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:46:22.688086 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:46:22.701612 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 4 23:46:22.732654 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 4 23:46:22.732724 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 23:46:22.732736 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 23:46:22.732746 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 23:46:22.733022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:46:22.738674 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 4 23:46:22.738690 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 4 23:46:22.738701 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 23:46:22.738711 kernel: ata3.00: applying bridge limits Sep 4 23:46:22.738721 kernel: ata3.00: configured for UDMA/100 Sep 4 23:46:22.741363 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 23:46:22.785349 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 23:46:22.786009 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 23:46:22.798378 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 4 23:46:22.814448 disk-uuid[566]: Primary Header is updated. Sep 4 23:46:22.814448 disk-uuid[566]: Secondary Entries is updated. Sep 4 23:46:22.814448 disk-uuid[566]: Secondary Header is updated. Sep 4 23:46:22.819354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:46:22.825359 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:46:23.849065 disk-uuid[582]: The operation has completed successfully. Sep 4 23:46:23.850584 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:46:23.883822 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 23:46:23.883970 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 23:46:23.932596 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Sep 4 23:46:23.936062 sh[598]: Success Sep 4 23:46:23.952342 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 4 23:46:23.997977 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 23:46:24.018435 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 23:46:24.023723 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 23:46:24.033742 kernel: BTRFS info (device dm-0): first mount of filesystem 185ffa67-4184-4488-b7c8-7c0711a63b2d Sep 4 23:46:24.033778 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:46:24.033790 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 23:46:24.034746 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 23:46:24.035485 kernel: BTRFS info (device dm-0): using free space tree Sep 4 23:46:24.041443 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 23:46:24.075653 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 23:46:24.086545 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 23:46:24.088550 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 23:46:24.107480 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:46:24.107539 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:46:24.107555 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:46:24.110347 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:46:24.116389 kernel: BTRFS info (device vda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:46:24.210883 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 4 23:46:24.227486 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:46:24.258636 systemd-networkd[774]: lo: Link UP Sep 4 23:46:24.258648 systemd-networkd[774]: lo: Gained carrier Sep 4 23:46:24.260497 systemd-networkd[774]: Enumeration completed Sep 4 23:46:24.260622 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:46:24.260900 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:46:24.260905 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:46:24.282860 systemd-networkd[774]: eth0: Link UP Sep 4 23:46:24.282865 systemd-networkd[774]: eth0: Gained carrier Sep 4 23:46:24.282877 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:46:24.284696 systemd[1]: Reached target network.target - Network. Sep 4 23:46:24.308444 systemd-networkd[774]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 23:46:24.766485 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 23:46:24.835581 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 4 23:46:25.140073 ignition[780]: Ignition 2.20.0 Sep 4 23:46:25.140087 ignition[780]: Stage: fetch-offline Sep 4 23:46:25.140146 ignition[780]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:46:25.140157 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:46:25.140305 ignition[780]: parsed url from cmdline: "" Sep 4 23:46:25.140323 ignition[780]: no config URL provided Sep 4 23:46:25.140330 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 23:46:25.140340 ignition[780]: no config at "/usr/lib/ignition/user.ign" Sep 4 23:46:25.140371 ignition[780]: op(1): [started] loading QEMU firmware config module Sep 4 23:46:25.140376 ignition[780]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 23:46:25.165814 ignition[780]: op(1): [finished] loading QEMU firmware config module Sep 4 23:46:25.207215 ignition[780]: parsing config with SHA512: 515a5f7cc399a5bb30b8b2f1270689ffef5fce9618d45cd182e040143e92d9855698f07d23f5f8d942de62ad10486519ab7fc107327077975c8d9b4ac088eb31 Sep 4 23:46:25.216030 unknown[780]: fetched base config from "system" Sep 4 23:46:25.216047 unknown[780]: fetched user config from "qemu" Sep 4 23:46:25.216536 ignition[780]: fetch-offline: fetch-offline passed Sep 4 23:46:25.231424 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:46:25.216624 ignition[780]: Ignition finished successfully Sep 4 23:46:25.233884 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 23:46:25.247675 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 4 23:46:25.267590 ignition[791]: Ignition 2.20.0 Sep 4 23:46:25.267604 ignition[791]: Stage: kargs Sep 4 23:46:25.267767 ignition[791]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:46:25.267778 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:46:25.271639 ignition[791]: kargs: kargs passed Sep 4 23:46:25.271691 ignition[791]: Ignition finished successfully Sep 4 23:46:25.276098 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 23:46:25.285512 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 23:46:25.300849 ignition[799]: Ignition 2.20.0 Sep 4 23:46:25.300865 ignition[799]: Stage: disks Sep 4 23:46:25.329709 ignition[799]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:46:25.329748 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:46:25.333868 ignition[799]: disks: disks passed Sep 4 23:46:25.334680 ignition[799]: Ignition finished successfully Sep 4 23:46:25.338166 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 23:46:25.340754 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 23:46:25.340861 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 23:46:25.346242 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:46:25.347497 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:46:25.348743 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:46:25.368449 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 23:46:25.408562 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 23:46:25.422531 systemd-networkd[774]: eth0: Gained IPv6LL Sep 4 23:46:26.000459 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 23:46:26.019477 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 4 23:46:26.506341 kernel: EXT4-fs (vda9): mounted filesystem 86dd2c20-900e-43ec-8fda-e9f0f484a013 r/w with ordered data mode. Quota mode: none. Sep 4 23:46:26.507509 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 23:46:26.529026 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 23:46:26.542492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 23:46:26.544896 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 23:46:26.547487 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 23:46:26.547559 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 23:46:26.570282 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (818) Sep 4 23:46:26.547597 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:46:26.568083 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 23:46:26.577540 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:46:26.577569 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:46:26.577581 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:46:26.577591 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:46:26.571259 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 23:46:26.580384 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 23:46:26.678106 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 23:46:26.684816 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Sep 4 23:46:26.706082 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 23:46:26.711695 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 23:46:26.921274 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 23:46:26.938446 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 23:46:26.948108 kernel: BTRFS info (device vda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:46:26.945545 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 23:46:26.948821 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 23:46:27.040582 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 23:46:27.078132 ignition[934]: INFO : Ignition 2.20.0 Sep 4 23:46:27.078132 ignition[934]: INFO : Stage: mount Sep 4 23:46:27.086250 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:46:27.086250 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:46:27.089031 ignition[934]: INFO : mount: mount passed Sep 4 23:46:27.089794 ignition[934]: INFO : Ignition finished successfully Sep 4 23:46:27.092519 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 23:46:27.101491 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 23:46:27.517696 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 23:46:27.527352 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (945)
Sep 4 23:46:27.529577 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:46:27.529613 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:46:27.529629 kernel: BTRFS info (device vda6): using free space tree
Sep 4 23:46:27.533353 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 23:46:27.542382 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:46:27.579648 ignition[962]: INFO : Ignition 2.20.0
Sep 4 23:46:27.579648 ignition[962]: INFO : Stage: files
Sep 4 23:46:27.581577 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:46:27.581577 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:46:27.584206 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:46:27.586172 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:46:27.586172 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:46:27.592228 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:46:27.593953 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:46:27.596115 unknown[962]: wrote ssh authorized keys file for user: core
Sep 4 23:46:27.597406 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:46:27.599808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 4 23:46:27.602156 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 4 23:46:27.664958 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:46:27.901039 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 4 23:46:27.901039 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 4 23:46:27.972008 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 4 23:46:28.435162 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 23:46:29.478429 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 4 23:46:29.478429 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 23:46:29.502722 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:46:29.502722 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:46:29.502722 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 23:46:29.502722 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 4 23:46:29.502722 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 23:46:29.502722 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 23:46:29.502722 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 4 23:46:29.502722 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 23:46:29.581427 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 23:46:29.589214 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 23:46:29.635258 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 23:46:29.635258 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:46:29.635258 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:46:29.635258 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:46:29.635258 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:46:29.635258 ignition[962]: INFO : files: files passed
Sep 4 23:46:29.635258 ignition[962]: INFO : Ignition finished successfully
Sep 4 23:46:29.592605 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:46:29.649660 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:46:29.652038 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:46:29.653865 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:46:29.654077 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:46:29.663285 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 23:46:29.665906 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:46:29.667897 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:46:29.671099 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:46:29.669244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:46:29.671442 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:46:29.683634 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:46:29.719045 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:46:29.719202 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:46:29.721771 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:46:29.724139 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:46:29.726356 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:46:29.727289 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:46:29.747259 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:46:29.759455 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:46:29.769886 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:46:29.771112 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:46:29.773300 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:46:29.775335 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:46:29.775455 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:46:29.777677 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:46:29.779193 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:46:29.781168 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:46:29.783147 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:46:29.785178 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:46:29.787340 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:46:29.789445 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:46:29.791690 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:46:29.793675 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:46:29.795927 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:46:29.797951 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:46:29.798118 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:46:29.800651 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:46:29.802108 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:46:29.804147 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:46:29.804341 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:46:29.806536 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:46:29.806667 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:46:29.809264 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:46:29.809391 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:46:29.811410 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:46:29.813278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:46:29.817357 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:46:29.819707 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:46:29.822233 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:46:29.824520 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:46:29.824620 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:46:29.827224 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:46:29.827322 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:46:29.829954 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:46:29.830103 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:46:29.832297 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:46:29.832417 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:46:29.892603 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:46:29.894676 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:46:29.895775 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:46:29.895936 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:46:29.898262 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:46:29.898431 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:46:29.906603 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:46:29.906744 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:46:29.969886 ignition[1016]: INFO : Ignition 2.20.0
Sep 4 23:46:29.969886 ignition[1016]: INFO : Stage: umount
Sep 4 23:46:29.973486 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:46:29.973486 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 23:46:29.973486 ignition[1016]: INFO : umount: umount passed
Sep 4 23:46:29.973486 ignition[1016]: INFO : Ignition finished successfully
Sep 4 23:46:29.969942 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:46:29.978627 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:46:29.978760 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:46:29.980655 systemd[1]: Stopped target network.target - Network.
Sep 4 23:46:29.982852 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:46:29.982973 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:46:29.985229 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:46:29.985302 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:46:29.987658 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:46:29.987742 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:46:29.990078 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:46:29.990157 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:46:29.992760 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:46:29.995198 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:46:29.997978 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:46:29.998137 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:46:30.001005 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:46:30.001118 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:46:30.003262 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:46:30.003417 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:46:30.008373 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:46:30.008593 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:46:30.008720 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:46:30.012530 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:46:30.013663 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:46:30.013738 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:46:30.025068 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:46:30.027414 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:46:30.027509 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:46:30.030066 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:46:30.030127 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:30.033136 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:46:30.033190 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:46:30.035686 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:46:30.035750 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:46:30.038575 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:46:30.043943 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:46:30.044042 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:46:30.054879 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:46:30.055083 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:46:30.060414 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:46:30.060637 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:46:30.063404 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:46:30.063470 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:46:30.065777 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:46:30.065834 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:46:30.068055 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:46:30.068118 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:46:30.070624 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:46:30.070677 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:46:30.072967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:46:30.073024 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:46:30.107476 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:46:30.109910 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:46:30.109990 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:46:30.113288 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 23:46:30.113360 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:46:30.116301 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:46:30.116375 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:46:30.118005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:46:30.118060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:46:30.122114 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 23:46:30.122187 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:46:30.122603 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:46:30.122722 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:46:30.126298 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:46:30.141528 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:46:30.150703 systemd[1]: Switching root.
Sep 4 23:46:30.190707 systemd-journald[194]: Journal stopped
Sep 4 23:46:34.408827 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:46:34.408910 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:46:34.408931 kernel: SELinux: policy capability open_perms=1
Sep 4 23:46:34.408954 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:46:34.408969 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:46:34.408984 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:46:34.409003 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:46:34.409018 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:46:34.409033 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:46:34.409049 kernel: audit: type=1403 audit(1757029592.669:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:46:34.409071 systemd[1]: Successfully loaded SELinux policy in 200.243ms.
Sep 4 23:46:34.409125 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.200ms.
Sep 4 23:46:34.409160 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:46:34.409177 systemd[1]: Detected virtualization kvm.
Sep 4 23:46:34.409193 systemd[1]: Detected architecture x86-64.
Sep 4 23:46:34.409209 systemd[1]: Detected first boot.
Sep 4 23:46:34.409224 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:46:34.409240 zram_generator::config[1062]: No configuration found.
Sep 4 23:46:34.409256 kernel: Guest personality initialized and is inactive
Sep 4 23:46:34.409270 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 23:46:34.409285 kernel: Initialized host personality
Sep 4 23:46:34.409308 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:46:34.409340 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:46:34.409360 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:46:34.409376 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:46:34.409392 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:46:34.409408 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:46:34.409423 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:46:34.409439 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:46:34.409463 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:46:34.409489 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:46:34.409506 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:46:34.409522 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:46:34.409539 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:46:34.409554 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:46:34.409570 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:46:34.409586 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:46:34.409602 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:46:34.409626 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:46:34.409642 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:46:34.409659 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:46:34.409676 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 23:46:34.409691 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:46:34.409713 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:46:34.409733 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:46:34.409768 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:46:34.409786 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:46:34.409802 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:46:34.409818 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:46:34.409834 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:46:34.409850 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:46:34.409866 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:46:34.409882 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:46:34.409898 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:46:34.409914 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:46:34.409938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:46:34.409955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:46:34.409970 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:46:34.409986 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:46:34.410002 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:46:34.410018 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:46:34.410034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:46:34.410051 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:46:34.410067 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:46:34.410091 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:46:34.410111 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:46:34.410128 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:46:34.410144 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:46:34.410159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:46:34.410175 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:46:34.410197 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:46:34.410214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:46:34.410241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:46:34.410260 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:46:34.410278 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:46:34.410294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:46:34.410335 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:46:34.410354 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:46:34.410370 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:46:34.410386 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:46:34.410411 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:46:34.410428 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:46:34.410444 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:46:34.410460 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:46:34.410475 kernel: loop: module loaded
Sep 4 23:46:34.410515 systemd-journald[1126]: Collecting audit messages is disabled.
Sep 4 23:46:34.410548 systemd-journald[1126]: Journal started
Sep 4 23:46:34.410585 systemd-journald[1126]: Runtime Journal (/run/log/journal/8abe6f9e457a459db5cf1c55c5b68367) is 6M, max 48M, 42M free.
Sep 4 23:46:33.848921 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:46:33.861606 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 23:46:33.862079 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:46:33.862507 systemd[1]: systemd-journald.service: Consumed 1.052s CPU time.
Sep 4 23:46:34.413347 kernel: fuse: init (API version 7.39)
Sep 4 23:46:34.416337 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:46:34.419343 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:46:34.424563 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:46:34.459579 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:46:34.462389 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:46:34.462430 systemd[1]: Stopped verity-setup.service.
Sep 4 23:46:34.466339 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:46:34.471143 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:46:34.472908 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:46:34.474720 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:46:34.476628 kernel: ACPI: bus type drm_connector registered
Sep 4 23:46:34.476880 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:46:34.513710 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:46:34.515171 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:46:34.516541 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:46:34.518438 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:46:34.521416 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:46:34.521684 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:46:34.523872 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:46:34.524219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:46:34.526116 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:46:34.526519 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:46:34.541218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:46:34.541515 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:46:34.543231 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:46:34.543477 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:46:34.544902 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:46:34.545112 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:46:34.546708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:46:34.548442 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:46:34.550154 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:46:34.552156 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:46:34.571412 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:46:34.582564 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:46:34.585669 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:46:34.597609 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:46:34.597662 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:46:34.599203 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:46:34.602187 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 23:46:34.604896 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:46:34.606181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:46:34.660605 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:46:34.663951 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:46:34.665562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:46:34.671863 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:46:34.673426 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:46:34.682041 systemd-journald[1126]: Time spent on flushing to /var/log/journal/8abe6f9e457a459db5cf1c55c5b68367 is 16.405ms for 1027 entries.
Sep 4 23:46:34.682041 systemd-journald[1126]: System Journal (/var/log/journal/8abe6f9e457a459db5cf1c55c5b68367) is 8M, max 195.6M, 187.6M free.
Sep 4 23:46:35.873705 systemd-journald[1126]: Received client request to flush runtime journal.
Sep 4 23:46:35.873798 kernel: loop0: detected capacity change from 0 to 147912
Sep 4 23:46:35.873830 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:46:35.873847 kernel: loop1: detected capacity change from 0 to 138176
Sep 4 23:46:35.873866 kernel: loop2: detected capacity change from 0 to 229808
Sep 4 23:46:35.873885 kernel: loop3: detected capacity change from 0 to 147912
Sep 4 23:46:35.873902 kernel: loop4: detected capacity change from 0 to 138176
Sep 4 23:46:35.873918 kernel: loop5: detected capacity change from 0 to 229808
Sep 4 23:46:35.873936 zram_generator::config[1218]: No configuration found.
Sep 4 23:46:35.873968 zram_generator::config[1291]: No configuration found.
Sep 4 23:46:34.677465 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:35.874365 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:46:34.680565 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:46:34.685334 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 23:46:34.690382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:46:34.692115 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 23:46:34.694914 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 23:46:34.720139 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 23:46:34.729037 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 23:46:34.779920 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 23:46:34.779962 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Sep 4 23:46:34.779978 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Sep 4 23:46:34.781969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:46:34.806819 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 23:46:35.242853 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 23:46:35.243976 (sd-merge)[1189]: Merged extensions into '/usr'. Sep 4 23:46:35.248629 systemd[1]: Reload requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 23:46:35.248641 systemd[1]: Reloading... Sep 4 23:46:35.490430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:46:35.563303 systemd[1]: Reloading finished in 314 ms. Sep 4 23:46:35.583642 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Sep 4 23:46:35.611465 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 23:46:35.616546 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 23:46:35.634383 systemd[1]: Starting ensure-sysext.service... Sep 4 23:46:35.659997 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 23:46:35.671868 systemd[1]: Reload requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Sep 4 23:46:35.671880 systemd[1]: Reloading... Sep 4 23:46:35.912023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:46:35.983479 systemd[1]: Reloading finished in 311 ms. Sep 4 23:46:36.015796 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 23:46:36.023329 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 23:46:36.025042 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 23:46:36.067681 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 23:46:36.074157 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:46:36.074697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:46:36.076226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:46:36.078973 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:46:36.083752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:46:36.085191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 4 23:46:36.085309 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:46:36.085466 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:46:36.088702 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:46:36.088980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:46:36.091047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:46:36.091267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:46:36.093202 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:46:36.093467 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:46:36.100452 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:46:36.100684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:46:36.110743 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:46:36.113262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:46:36.116130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:46:36.192432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:46:36.192634 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 4 23:46:36.192770 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:46:36.193942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:46:36.194183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:46:36.196257 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:46:36.196496 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:46:36.204556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:46:36.204866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:46:36.208620 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:46:36.209036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:46:36.218783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:46:36.236164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:46:36.239031 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:46:36.240868 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:46:36.241105 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:46:36.241306 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 4 23:46:36.241521 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:46:36.243250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:46:36.243951 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:46:36.248589 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:46:36.248939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:46:36.251542 systemd[1]: Finished ensure-sysext.service. Sep 4 23:46:36.255955 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:46:36.256297 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:46:36.261835 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:46:36.292751 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 23:46:36.303583 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:46:36.306974 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:46:36.330654 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. Sep 4 23:46:36.330681 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. Sep 4 23:46:36.339434 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:46:36.458492 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 23:46:36.458759 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 23:46:36.459822 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 4 23:46:36.460087 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Sep 4 23:46:36.460169 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Sep 4 23:46:36.464785 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:46:36.464799 systemd-tmpfiles[1358]: Skipping /boot Sep 4 23:46:36.481798 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:46:36.481819 systemd-tmpfiles[1358]: Skipping /boot Sep 4 23:46:36.549534 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:46:36.586518 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:46:36.707515 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 23:46:36.711207 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 23:46:36.717498 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:46:36.738947 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 23:46:36.745510 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 23:46:36.749808 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 23:46:36.901081 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 23:46:36.903136 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 23:46:36.907758 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 23:46:37.035794 augenrules[1395]: No rules Sep 4 23:46:37.038241 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:46:37.038577 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:46:37.061737 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Sep 4 23:46:37.083541 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 23:46:37.176792 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 23:46:37.178505 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 23:46:37.185190 systemd-resolved[1372]: Positive Trust Anchors: Sep 4 23:46:37.185209 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:46:37.185242 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:46:37.190855 systemd-resolved[1372]: Defaulting to hostname 'linux'. Sep 4 23:46:37.193057 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:46:37.203177 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:46:37.255393 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 23:46:37.283635 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:46:37.288137 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 23:46:37.324060 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 23:46:37.339108 systemd-udevd[1405]: Using default interface naming scheme 'v255'. 
Sep 4 23:46:37.453509 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:46:37.493529 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:46:37.500740 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 23:46:37.533308 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 23:46:37.537406 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 23:46:37.575358 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1424) Sep 4 23:46:37.617340 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 4 23:46:37.628967 systemd-networkd[1427]: lo: Link UP Sep 4 23:46:37.628978 systemd-networkd[1427]: lo: Gained carrier Sep 4 23:46:37.630433 kernel: ACPI: button: Power Button [PWRF] Sep 4 23:46:37.634541 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 4 23:46:37.643163 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 4 23:46:37.643374 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 4 23:46:37.634727 systemd-networkd[1427]: Enumeration completed Sep 4 23:46:37.638597 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:46:37.638603 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:46:37.648176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 4 23:46:37.698630 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 4 23:46:37.698703 systemd-networkd[1427]: eth0: Link UP Sep 4 23:46:37.698709 systemd-networkd[1427]: eth0: Gained carrier Sep 4 23:46:37.698732 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:46:37.701516 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:46:37.703185 systemd[1]: Reached target network.target - Network. Sep 4 23:46:37.710348 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 23:46:37.711541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 23:46:37.713435 systemd-networkd[1427]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 23:46:37.714821 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection. Sep 4 23:46:38.511333 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 23:46:38.511368 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 23:46:38.511408 systemd-timesyncd[1376]: Initial clock synchronization to Thu 2025-09-04 23:46:38.511290 UTC. Sep 4 23:46:38.512453 systemd-resolved[1372]: Clock change detected. Flushing caches. Sep 4 23:46:38.514039 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 23:46:38.554823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:46:38.576775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:46:38.577110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 23:46:38.582161 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 23:46:38.589163 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 23:46:38.590945 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 23:46:38.611379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:46:38.737387 kernel: kvm_amd: TSC scaling supported Sep 4 23:46:38.737505 kernel: kvm_amd: Nested Virtualization enabled Sep 4 23:46:38.737526 kernel: kvm_amd: Nested Paging enabled Sep 4 23:46:38.738844 kernel: kvm_amd: LBR virtualization supported Sep 4 23:46:38.738876 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 23:46:38.739644 kernel: kvm_amd: Virtual GIF supported Sep 4 23:46:38.790244 kernel: EDAC MC: Ver: 3.0.0 Sep 4 23:46:38.794912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:46:38.829768 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 23:46:38.842481 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 23:46:38.868674 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:46:38.912691 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 23:46:38.924035 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:46:38.925283 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:46:38.926591 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 23:46:38.927980 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 23:46:38.929562 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Sep 4 23:46:38.930839 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 23:46:38.932194 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 23:46:38.933554 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 23:46:38.933583 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:46:38.934562 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:46:38.944479 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 23:46:38.947803 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 23:46:38.951967 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 23:46:38.954055 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 23:46:38.955303 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 23:46:38.962973 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 23:46:38.996309 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 23:46:38.998788 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 23:46:39.000572 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 23:46:39.017174 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:46:39.018238 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:46:39.019291 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:46:39.019327 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:46:39.020756 systemd[1]: Starting containerd.service - containerd container runtime... 
Sep 4 23:46:39.022877 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:46:39.023368 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 23:46:39.026619 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 23:46:39.031350 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 23:46:39.034233 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 23:46:39.038404 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 23:46:39.043344 jq[1466]: false Sep 4 23:46:39.044307 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 23:46:39.055514 dbus-daemon[1465]: [system] SELinux support is enabled Sep 4 23:46:39.059184 extend-filesystems[1467]: Found loop3 Sep 4 23:46:39.059184 extend-filesystems[1467]: Found loop4 Sep 4 23:46:39.059184 extend-filesystems[1467]: Found loop5 Sep 4 23:46:39.059184 extend-filesystems[1467]: Found sr0 Sep 4 23:46:39.059184 extend-filesystems[1467]: Found vda Sep 4 23:46:39.059184 extend-filesystems[1467]: Found vda1 Sep 4 23:46:39.059184 extend-filesystems[1467]: Found vda2 Sep 4 23:46:39.059184 extend-filesystems[1467]: Found vda3 Sep 4 23:46:39.059184 extend-filesystems[1467]: Found usr Sep 4 23:46:39.059184 extend-filesystems[1467]: Found vda4 Sep 4 23:46:39.059184 extend-filesystems[1467]: Found vda6 Sep 4 23:46:39.059184 extend-filesystems[1467]: Found vda7 Sep 4 23:46:39.059184 extend-filesystems[1467]: Found vda9 Sep 4 23:46:39.059184 extend-filesystems[1467]: Checking size of /dev/vda9 Sep 4 23:46:39.058388 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 23:46:39.072304 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 4 23:46:39.078140 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:46:39.086227 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 23:46:39.087018 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 23:46:39.095519 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 23:46:39.102717 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:46:39.126195 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 23:46:39.131140 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 23:46:39.136650 extend-filesystems[1467]: Resized partition /dev/vda9 Sep 4 23:46:39.142427 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 23:46:39.143319 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024) Sep 4 23:46:39.142790 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:46:39.143320 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 23:46:39.143652 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 23:46:39.180222 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1411) Sep 4 23:46:39.180329 jq[1485]: true Sep 4 23:46:39.181692 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:46:39.182020 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 4 23:46:39.184842 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 23:46:39.184892 update_engine[1484]: I20250904 23:46:39.184641 1484 main.cc:92] Flatcar Update Engine starting Sep 4 23:46:39.186619 update_engine[1484]: I20250904 23:46:39.186585 1484 update_check_scheduler.cc:74] Next update check in 11m21s Sep 4 23:46:39.201205 (ntainerd)[1493]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:46:39.204997 jq[1491]: true Sep 4 23:46:39.222615 systemd[1]: Started update-engine.service - Update Engine. Sep 4 23:46:39.225570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:46:39.225604 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 23:46:39.227293 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 23:46:39.227313 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 23:46:39.235298 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 23:46:39.262992 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:46:39.287215 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:46:39.300425 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:46:39.318486 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:46:39.318814 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:46:39.321753 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 4 23:46:39.432696 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:46:39.463657 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:46:39.469630 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:46:39.474575 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 23:46:39.475734 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 23:46:39.479218 systemd-logind[1475]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 23:46:39.479249 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 23:46:39.480944 systemd-logind[1475]: New seat seat0. Sep 4 23:46:39.484603 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 23:46:39.494613 tar[1490]: linux-amd64/LICENSE Sep 4 23:46:39.495488 tar[1490]: linux-amd64/helm Sep 4 23:46:39.798161 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 23:46:40.554365 systemd-networkd[1427]: eth0: Gained IPv6LL Sep 4 23:46:40.558110 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:46:40.634812 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:46:40.643499 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 23:46:41.038949 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 23:46:41.038949 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 23:46:41.038949 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Sep 4 23:46:41.044288 extend-filesystems[1467]: Resized filesystem in /dev/vda9 Sep 4 23:46:41.052866 containerd[1493]: time="2025-09-04T23:46:41.039065697Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 4 23:46:41.051702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:41.055097 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:46:41.057559 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:46:41.057908 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 23:46:41.075798 containerd[1493]: time="2025-09-04T23:46:41.075741675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:46:41.077731 containerd[1493]: time="2025-09-04T23:46:41.077586083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:46:41.077731 containerd[1493]: time="2025-09-04T23:46:41.077629855Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 23:46:41.077731 containerd[1493]: time="2025-09-04T23:46:41.077649452Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 23:46:41.077859 containerd[1493]: time="2025-09-04T23:46:41.077842965Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 23:46:41.077880 containerd[1493]: time="2025-09-04T23:46:41.077859626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 4 23:46:41.077959 containerd[1493]: time="2025-09-04T23:46:41.077934677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:46:41.077959 containerd[1493]: time="2025-09-04T23:46:41.077953863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:46:41.078241 containerd[1493]: time="2025-09-04T23:46:41.078219561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:46:41.078288 containerd[1493]: time="2025-09-04T23:46:41.078252252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 23:46:41.078288 containerd[1493]: time="2025-09-04T23:46:41.078266900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:46:41.078288 containerd[1493]: time="2025-09-04T23:46:41.078276738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 23:46:41.078397 containerd[1493]: time="2025-09-04T23:46:41.078379601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:46:41.078655 containerd[1493]: time="2025-09-04T23:46:41.078627175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:46:41.078822 containerd[1493]: time="2025-09-04T23:46:41.078797685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:46:41.078822 containerd[1493]: time="2025-09-04T23:46:41.078814005Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 23:46:41.078933 containerd[1493]: time="2025-09-04T23:46:41.078916047Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 23:46:41.078994 containerd[1493]: time="2025-09-04T23:46:41.078979586Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:46:41.092984 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 23:46:41.093337 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 23:46:41.103641 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:46:41.125077 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 23:46:41.180909 tar[1490]: linux-amd64/README.md Sep 4 23:46:41.197340 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 23:46:41.910943 bash[1527]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:46:41.913541 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 23:46:41.916362 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 23:46:41.988632 containerd[1493]: time="2025-09-04T23:46:41.988551733Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 23:46:41.988885 containerd[1493]: time="2025-09-04T23:46:41.988657953Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 4 23:46:41.988885 containerd[1493]: time="2025-09-04T23:46:41.988683581Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 23:46:41.988885 containerd[1493]: time="2025-09-04T23:46:41.988711703Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 23:46:41.988885 containerd[1493]: time="2025-09-04T23:46:41.988740497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 23:46:41.989070 containerd[1493]: time="2025-09-04T23:46:41.989025622Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 23:46:41.989555 containerd[1493]: time="2025-09-04T23:46:41.989517905Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 23:46:41.989738 containerd[1493]: time="2025-09-04T23:46:41.989703774Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 23:46:41.989738 containerd[1493]: time="2025-09-04T23:46:41.989730764Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 23:46:41.989780 containerd[1493]: time="2025-09-04T23:46:41.989755551Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 23:46:41.989780 containerd[1493]: time="2025-09-04T23:46:41.989775548Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 23:46:41.989829 containerd[1493]: time="2025-09-04T23:46:41.989792710Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 4 23:46:41.989829 containerd[1493]: time="2025-09-04T23:46:41.989809672Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 23:46:41.989888 containerd[1493]: time="2025-09-04T23:46:41.989841261Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 23:46:41.989888 containerd[1493]: time="2025-09-04T23:46:41.989863303Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 23:46:41.989888 containerd[1493]: time="2025-09-04T23:46:41.989880525Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 23:46:41.989957 containerd[1493]: time="2025-09-04T23:46:41.989896615Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 23:46:41.989957 containerd[1493]: time="2025-09-04T23:46:41.989912264Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 23:46:41.989957 containerd[1493]: time="2025-09-04T23:46:41.989937201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.989957 containerd[1493]: time="2025-09-04T23:46:41.989955536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990067 containerd[1493]: time="2025-09-04T23:46:41.989971666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990067 containerd[1493]: time="2025-09-04T23:46:41.989987846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 4 23:46:41.990067 containerd[1493]: time="2025-09-04T23:46:41.990003836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990067 containerd[1493]: time="2025-09-04T23:46:41.990021780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990067 containerd[1493]: time="2025-09-04T23:46:41.990037920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990067 containerd[1493]: time="2025-09-04T23:46:41.990053950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990067 containerd[1493]: time="2025-09-04T23:46:41.990071333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990267 containerd[1493]: time="2025-09-04T23:46:41.990100267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990267 containerd[1493]: time="2025-09-04T23:46:41.990116327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990267 containerd[1493]: time="2025-09-04T23:46:41.990153727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990267 containerd[1493]: time="2025-09-04T23:46:41.990179966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990267 containerd[1493]: time="2025-09-04T23:46:41.990199333Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 23:46:41.990267 containerd[1493]: time="2025-09-04T23:46:41.990224129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Sep 4 23:46:41.990267 containerd[1493]: time="2025-09-04T23:46:41.990246792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990267 containerd[1493]: time="2025-09-04T23:46:41.990261619Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:46:41.990487 containerd[1493]: time="2025-09-04T23:46:41.990328214Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:46:41.990487 containerd[1493]: time="2025-09-04T23:46:41.990353201Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:46:41.990487 containerd[1493]: time="2025-09-04T23:46:41.990418985Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 23:46:41.990487 containerd[1493]: time="2025-09-04T23:46:41.990440585Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:46:41.990487 containerd[1493]: time="2025-09-04T23:46:41.990453640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 23:46:41.990487 containerd[1493]: time="2025-09-04T23:46:41.990470281Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:46:41.990487 containerd[1493]: time="2025-09-04T23:46:41.990488916Z" level=info msg="NRI interface is disabled by configuration." Sep 4 23:46:41.990664 containerd[1493]: time="2025-09-04T23:46:41.990504204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 23:46:41.990970 containerd[1493]: time="2025-09-04T23:46:41.990892142Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:46:41.990970 containerd[1493]: time="2025-09-04T23:46:41.990958656Z" level=info msg="Connect containerd service" Sep 4 23:46:41.991158 containerd[1493]: time="2025-09-04T23:46:41.990998772Z" level=info msg="using legacy CRI server" Sep 4 23:46:41.991158 containerd[1493]: time="2025-09-04T23:46:41.991008921Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:46:41.991206 containerd[1493]: time="2025-09-04T23:46:41.991166356Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:46:41.991996 containerd[1493]: time="2025-09-04T23:46:41.991964452Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:46:41.992264 containerd[1493]: time="2025-09-04T23:46:41.992173324Z" level=info msg="Start subscribing containerd event" Sep 4 23:46:41.992300 containerd[1493]: time="2025-09-04T23:46:41.992265567Z" level=info msg="Start recovering state" Sep 4 23:46:41.992434 containerd[1493]: time="2025-09-04T23:46:41.992364222Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 4 23:46:41.992434 containerd[1493]: time="2025-09-04T23:46:41.992403005Z" level=info msg="Start event monitor" Sep 4 23:46:41.992434 containerd[1493]: time="2025-09-04T23:46:41.992418604Z" level=info msg="Start snapshots syncer" Sep 4 23:46:41.992434 containerd[1493]: time="2025-09-04T23:46:41.992430276Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:46:41.992540 containerd[1493]: time="2025-09-04T23:46:41.992441527Z" level=info msg="Start streaming server" Sep 4 23:46:41.992540 containerd[1493]: time="2025-09-04T23:46:41.992446316Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:46:41.992578 containerd[1493]: time="2025-09-04T23:46:41.992542396Z" level=info msg="containerd successfully booted in 1.704259s" Sep 4 23:46:41.993676 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 23:46:43.055670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:43.117228 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:46:43.117950 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:46:43.119556 systemd[1]: Startup finished in 1.690s (kernel) + 11.860s (initrd) + 9.733s (userspace) = 23.284s. Sep 4 23:46:44.190781 kubelet[1579]: E0904 23:46:44.190711 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:46:44.196280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:46:44.196542 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 23:46:44.197073 systemd[1]: kubelet.service: Consumed 1.736s CPU time, 271.3M memory peak. Sep 4 23:46:48.963159 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 23:46:48.972387 systemd[1]: Started sshd@0-10.0.0.62:22-10.0.0.1:54120.service - OpenSSH per-connection server daemon (10.0.0.1:54120). Sep 4 23:46:49.028884 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 54120 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:46:49.031095 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:49.044689 systemd-logind[1475]: New session 1 of user core. Sep 4 23:46:49.046041 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:46:49.060576 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:46:49.080346 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:46:49.095678 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 23:46:49.099117 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:46:49.102008 systemd-logind[1475]: New session c1 of user core. Sep 4 23:46:49.270275 systemd[1597]: Queued start job for default target default.target. Sep 4 23:46:49.285852 systemd[1597]: Created slice app.slice - User Application Slice. Sep 4 23:46:49.285884 systemd[1597]: Reached target paths.target - Paths. Sep 4 23:46:49.285936 systemd[1597]: Reached target timers.target - Timers. Sep 4 23:46:49.287988 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:46:49.303073 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:46:49.303304 systemd[1597]: Reached target sockets.target - Sockets. Sep 4 23:46:49.303370 systemd[1597]: Reached target basic.target - Basic System. 
Sep 4 23:46:49.303433 systemd[1597]: Reached target default.target - Main User Target. Sep 4 23:46:49.303475 systemd[1597]: Startup finished in 193ms. Sep 4 23:46:49.303896 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:46:49.320477 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 23:46:49.388099 systemd[1]: Started sshd@1-10.0.0.62:22-10.0.0.1:54134.service - OpenSSH per-connection server daemon (10.0.0.1:54134). Sep 4 23:46:49.454011 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 54134 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:46:49.456104 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:49.461629 systemd-logind[1475]: New session 2 of user core. Sep 4 23:46:49.471364 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 23:46:49.551644 sshd[1610]: Connection closed by 10.0.0.1 port 54134 Sep 4 23:46:49.552426 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:49.571205 systemd[1]: sshd@1-10.0.0.62:22-10.0.0.1:54134.service: Deactivated successfully. Sep 4 23:46:49.573448 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 23:46:49.574497 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit. Sep 4 23:46:49.588963 systemd[1]: Started sshd@2-10.0.0.62:22-10.0.0.1:54138.service - OpenSSH per-connection server daemon (10.0.0.1:54138). Sep 4 23:46:49.590574 systemd-logind[1475]: Removed session 2. Sep 4 23:46:49.633560 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 54138 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:46:49.635514 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:49.640661 systemd-logind[1475]: New session 3 of user core. Sep 4 23:46:49.650600 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 4 23:46:49.705657 sshd[1618]: Connection closed by 10.0.0.1 port 54138 Sep 4 23:46:49.706147 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:49.724286 systemd[1]: sshd@2-10.0.0.62:22-10.0.0.1:54138.service: Deactivated successfully. Sep 4 23:46:49.726173 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 23:46:49.727638 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit. Sep 4 23:46:49.739652 systemd[1]: Started sshd@3-10.0.0.62:22-10.0.0.1:54144.service - OpenSSH per-connection server daemon (10.0.0.1:54144). Sep 4 23:46:49.740878 systemd-logind[1475]: Removed session 3. Sep 4 23:46:49.783343 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 54144 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:46:49.785468 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:49.791610 systemd-logind[1475]: New session 4 of user core. Sep 4 23:46:49.801550 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 23:46:49.858785 sshd[1626]: Connection closed by 10.0.0.1 port 54144 Sep 4 23:46:49.859119 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:49.873985 systemd[1]: sshd@3-10.0.0.62:22-10.0.0.1:54144.service: Deactivated successfully. Sep 4 23:46:49.876338 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:46:49.878452 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit. Sep 4 23:46:49.889502 systemd[1]: Started sshd@4-10.0.0.62:22-10.0.0.1:54150.service - OpenSSH per-connection server daemon (10.0.0.1:54150). Sep 4 23:46:49.890658 systemd-logind[1475]: Removed session 4. 
Sep 4 23:46:49.936143 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 54150 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:46:49.937956 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:49.943401 systemd-logind[1475]: New session 5 of user core. Sep 4 23:46:49.952621 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 23:46:50.023572 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:46:50.025873 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:46:50.048356 sudo[1635]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:50.050411 sshd[1634]: Connection closed by 10.0.0.1 port 54150 Sep 4 23:46:50.050906 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:50.063256 systemd[1]: sshd@4-10.0.0.62:22-10.0.0.1:54150.service: Deactivated successfully. Sep 4 23:46:50.065490 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:46:50.067419 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit. Sep 4 23:46:50.078438 systemd[1]: Started sshd@5-10.0.0.62:22-10.0.0.1:43382.service - OpenSSH per-connection server daemon (10.0.0.1:43382). Sep 4 23:46:50.079438 systemd-logind[1475]: Removed session 5. Sep 4 23:46:50.119409 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 43382 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:46:50.121335 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:50.126336 systemd-logind[1475]: New session 6 of user core. Sep 4 23:46:50.137300 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 23:46:50.195529 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:46:50.195957 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:46:50.201568 sudo[1645]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:50.208597 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:46:50.208935 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:46:50.237831 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:46:50.281532 augenrules[1667]: No rules Sep 4 23:46:50.283588 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:46:50.283896 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:46:50.285426 sudo[1644]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:50.287274 sshd[1643]: Connection closed by 10.0.0.1 port 43382 Sep 4 23:46:50.287744 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:50.305701 systemd[1]: sshd@5-10.0.0.62:22-10.0.0.1:43382.service: Deactivated successfully. Sep 4 23:46:50.307674 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:46:50.309551 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit. Sep 4 23:46:50.320483 systemd[1]: Started sshd@6-10.0.0.62:22-10.0.0.1:43396.service - OpenSSH per-connection server daemon (10.0.0.1:43396). Sep 4 23:46:50.321574 systemd-logind[1475]: Removed session 6. Sep 4 23:46:50.362532 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 43396 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:46:50.364271 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:50.369144 systemd-logind[1475]: New session 7 of user core. 
Sep 4 23:46:50.383365 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 23:46:50.439729 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 23:46:50.440079 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:46:51.125626 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 23:46:51.127536 (dockerd)[1699]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 23:46:51.997260 dockerd[1699]: time="2025-09-04T23:46:51.997015901Z" level=info msg="Starting up" Sep 4 23:46:54.447214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 23:46:54.456564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:54.661969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:54.666625 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:46:54.712884 kubelet[1730]: E0904 23:46:54.712675 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:46:54.721320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:46:54.721606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:46:54.722089 systemd[1]: kubelet.service: Consumed 269ms CPU time, 112.6M memory peak. Sep 4 23:46:56.823866 dockerd[1699]: time="2025-09-04T23:46:56.823735098Z" level=info msg="Loading containers: start." 
Sep 4 23:46:57.403153 kernel: Initializing XFRM netlink socket Sep 4 23:46:57.507565 systemd-networkd[1427]: docker0: Link UP Sep 4 23:46:58.110900 dockerd[1699]: time="2025-09-04T23:46:58.110835571Z" level=info msg="Loading containers: done." Sep 4 23:46:58.126328 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1994530066-merged.mount: Deactivated successfully. Sep 4 23:46:58.416505 dockerd[1699]: time="2025-09-04T23:46:58.416441212Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 23:46:58.416733 dockerd[1699]: time="2025-09-04T23:46:58.416579241Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 4 23:46:58.416766 dockerd[1699]: time="2025-09-04T23:46:58.416756403Z" level=info msg="Daemon has completed initialization" Sep 4 23:46:59.650467 dockerd[1699]: time="2025-09-04T23:46:59.650374747Z" level=info msg="API listen on /run/docker.sock" Sep 4 23:46:59.650646 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 23:47:01.229529 containerd[1493]: time="2025-09-04T23:47:01.229470591Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 4 23:47:04.912734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 23:47:04.925474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:47:05.118243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:47:05.122247 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:47:05.502432 kubelet[1921]: E0904 23:47:05.502376 1921 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:47:05.507210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:47:05.507462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:47:05.507941 systemd[1]: kubelet.service: Consumed 254ms CPU time, 113.2M memory peak. Sep 4 23:47:07.784436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967523017.mount: Deactivated successfully. Sep 4 23:47:13.439515 containerd[1493]: time="2025-09-04T23:47:13.439359656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:13.487055 containerd[1493]: time="2025-09-04T23:47:13.487008162Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 4 23:47:13.522170 containerd[1493]: time="2025-09-04T23:47:13.522084389Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:13.601603 containerd[1493]: time="2025-09-04T23:47:13.601521240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:13.603293 containerd[1493]: time="2025-09-04T23:47:13.603254913Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 12.373728336s" Sep 4 23:47:13.603373 containerd[1493]: time="2025-09-04T23:47:13.603300501Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 4 23:47:13.604871 containerd[1493]: time="2025-09-04T23:47:13.604832357Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 4 23:47:15.662848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 23:47:15.680330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:47:15.882778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:47:15.886869 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:47:17.220205 kubelet[1991]: E0904 23:47:17.220002 1991 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:47:17.225619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:47:17.225904 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:47:17.226476 systemd[1]: kubelet.service: Consumed 307ms CPU time, 108.6M memory peak. 
Sep 4 23:47:22.545956 containerd[1493]: time="2025-09-04T23:47:22.545879945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:22.671915 containerd[1493]: time="2025-09-04T23:47:22.671805056Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 4 23:47:22.783937 containerd[1493]: time="2025-09-04T23:47:22.783829628Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:22.950304 containerd[1493]: time="2025-09-04T23:47:22.950194679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:22.951808 containerd[1493]: time="2025-09-04T23:47:22.951733009Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 9.346859243s" Sep 4 23:47:22.951808 containerd[1493]: time="2025-09-04T23:47:22.951783665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 4 23:47:22.953027 containerd[1493]: time="2025-09-04T23:47:22.952881149Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 4 23:47:24.863614 update_engine[1484]: I20250904 23:47:24.863435 1484 update_attempter.cc:509] Updating boot flags... 
Sep 4 23:47:24.976397 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2010) Sep 4 23:47:25.087171 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2014) Sep 4 23:47:25.136158 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2014) Sep 4 23:47:27.412583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 4 23:47:27.425288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:47:27.596505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:47:27.601360 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:47:27.661100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:47:28.193473 kubelet[2027]: E0904 23:47:27.657026 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:47:27.661319 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:47:27.661761 systemd[1]: kubelet.service: Consumed 238ms CPU time, 110.8M memory peak. 
Sep 4 23:47:37.074712 containerd[1493]: time="2025-09-04T23:47:37.074626676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:37.162291 containerd[1493]: time="2025-09-04T23:47:37.162179547Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 4 23:47:37.337695 containerd[1493]: time="2025-09-04T23:47:37.337505083Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:37.476306 containerd[1493]: time="2025-09-04T23:47:37.476222071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:37.477703 containerd[1493]: time="2025-09-04T23:47:37.477634280Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 14.524705952s" Sep 4 23:47:37.477785 containerd[1493]: time="2025-09-04T23:47:37.477712208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 4 23:47:37.478492 containerd[1493]: time="2025-09-04T23:47:37.478373834Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 4 23:47:37.662661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 4 23:47:37.673322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 23:47:37.838951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:47:37.843227 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:47:39.515585 kubelet[2048]: E0904 23:47:39.515499 2048 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:47:39.520429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:47:39.520645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:47:39.521017 systemd[1]: kubelet.service: Consumed 239ms CPU time, 110.8M memory peak. Sep 4 23:47:48.354733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount995252770.mount: Deactivated successfully. Sep 4 23:47:49.662649 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 4 23:47:49.672494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:47:49.866588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:47:49.870727 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:47:49.981656 kubelet[2069]: E0904 23:47:49.981475 2069 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:47:49.987552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:47:49.987788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:47:49.988217 systemd[1]: kubelet.service: Consumed 264ms CPU time, 112.8M memory peak. Sep 4 23:47:52.883218 containerd[1493]: time="2025-09-04T23:47:52.883091592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:52.942480 containerd[1493]: time="2025-09-04T23:47:52.942281773Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 4 23:47:52.997947 containerd[1493]: time="2025-09-04T23:47:52.997861313Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:53.036448 containerd[1493]: time="2025-09-04T23:47:53.036344359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:47:53.037493 containerd[1493]: time="2025-09-04T23:47:53.037419810Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id 
\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 15.559005439s" Sep 4 23:47:53.037493 containerd[1493]: time="2025-09-04T23:47:53.037486194Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 4 23:47:53.038332 containerd[1493]: time="2025-09-04T23:47:53.038275456Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 4 23:47:58.016427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825509578.mount: Deactivated successfully. Sep 4 23:48:00.162555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Sep 4 23:48:00.177309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:48:00.361352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:00.366951 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:48:00.415993 kubelet[2094]: E0904 23:48:00.415760 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:48:00.421070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:48:00.421357 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:48:00.421811 systemd[1]: kubelet.service: Consumed 240ms CPU time, 110.7M memory peak. 
Sep 4 23:48:02.846553 containerd[1493]: time="2025-09-04T23:48:02.846460095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:02.905830 containerd[1493]: time="2025-09-04T23:48:02.905687401Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 4 23:48:03.015876 containerd[1493]: time="2025-09-04T23:48:03.015746322Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:03.105075 containerd[1493]: time="2025-09-04T23:48:03.104887259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:03.106291 containerd[1493]: time="2025-09-04T23:48:03.106246571Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 10.067902104s" Sep 4 23:48:03.106360 containerd[1493]: time="2025-09-04T23:48:03.106290683Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 4 23:48:03.106803 containerd[1493]: time="2025-09-04T23:48:03.106774301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 23:48:10.662869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Sep 4 23:48:10.677497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 23:48:10.863293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:10.881680 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:48:10.929155 kubelet[2159]: E0904 23:48:10.925952 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:48:10.931356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:48:10.931626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:48:10.932184 systemd[1]: kubelet.service: Consumed 246ms CPU time, 112.5M memory peak. Sep 4 23:48:13.308540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140863259.mount: Deactivated successfully. 
Sep 4 23:48:13.771330 containerd[1493]: time="2025-09-04T23:48:13.771234560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:13.846579 containerd[1493]: time="2025-09-04T23:48:13.846479289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 4 23:48:13.924614 containerd[1493]: time="2025-09-04T23:48:13.924521195Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:13.997695 containerd[1493]: time="2025-09-04T23:48:13.997619599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:13.998494 containerd[1493]: time="2025-09-04T23:48:13.998436502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 10.891624239s" Sep 4 23:48:13.998623 containerd[1493]: time="2025-09-04T23:48:13.998498448Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 23:48:13.999149 containerd[1493]: time="2025-09-04T23:48:13.999112880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 4 23:48:18.456432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739335045.mount: Deactivated successfully. Sep 4 23:48:21.162570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Sep 4 23:48:21.180408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:48:21.361514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:21.366867 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:48:21.403891 kubelet[2199]: E0904 23:48:21.403825 2199 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:48:21.408892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:48:21.409156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:48:21.409616 systemd[1]: kubelet.service: Consumed 221ms CPU time, 112.4M memory peak. 
Sep 4 23:48:23.115587 containerd[1493]: time="2025-09-04T23:48:23.115514103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:23.116578 containerd[1493]: time="2025-09-04T23:48:23.116530656Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 4 23:48:23.120776 containerd[1493]: time="2025-09-04T23:48:23.120731418Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:23.125924 containerd[1493]: time="2025-09-04T23:48:23.123878025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:48:23.127067 containerd[1493]: time="2025-09-04T23:48:23.127010043Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 9.127831429s" Sep 4 23:48:23.127162 containerd[1493]: time="2025-09-04T23:48:23.127074487Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 4 23:48:28.872282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:28.878267 systemd[1]: kubelet.service: Consumed 221ms CPU time, 112.4M memory peak. Sep 4 23:48:28.897262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:48:28.980287 systemd[1]: Reload requested from client PID 2272 ('systemctl') (unit session-7.scope)... 
Sep 4 23:48:28.980322 systemd[1]: Reloading... Sep 4 23:48:29.417615 zram_generator::config[2319]: No configuration found. Sep 4 23:48:31.025352 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:48:31.164489 systemd[1]: Reloading finished in 2183 ms. Sep 4 23:48:31.226607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:31.230998 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:48:31.231986 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:48:31.232329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:31.232373 systemd[1]: kubelet.service: Consumed 395ms CPU time, 98.4M memory peak. Sep 4 23:48:31.234290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:48:31.433471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:31.452704 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:48:31.506341 kubelet[2366]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:48:31.506341 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:48:31.506341 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:48:31.506341 kubelet[2366]: I0904 23:48:31.506162 2366 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:48:31.821742 kubelet[2366]: I0904 23:48:31.821600 2366 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 4 23:48:31.821742 kubelet[2366]: I0904 23:48:31.821639 2366 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:48:31.821908 kubelet[2366]: I0904 23:48:31.821858 2366 server.go:956] "Client rotation is on, will bootstrap in background" Sep 4 23:48:31.846146 kubelet[2366]: I0904 23:48:31.846066 2366 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:48:31.852146 kubelet[2366]: E0904 23:48:31.851138 2366 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 4 23:48:31.858966 kubelet[2366]: E0904 23:48:31.858922 2366 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:48:31.858966 kubelet[2366]: I0904 23:48:31.858965 2366 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:48:31.866039 kubelet[2366]: I0904 23:48:31.865972 2366 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:48:31.866382 kubelet[2366]: I0904 23:48:31.866313 2366 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:48:31.866650 kubelet[2366]: I0904 23:48:31.866359 2366 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:48:31.866650 kubelet[2366]: I0904 23:48:31.866644 2366 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:48:31.866886 
kubelet[2366]: I0904 23:48:31.866664 2366 container_manager_linux.go:303] "Creating device plugin manager" Sep 4 23:48:31.866886 kubelet[2366]: I0904 23:48:31.866869 2366 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:48:31.869978 kubelet[2366]: I0904 23:48:31.869939 2366 kubelet.go:480] "Attempting to sync node with API server" Sep 4 23:48:31.870051 kubelet[2366]: I0904 23:48:31.869985 2366 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:48:31.870051 kubelet[2366]: I0904 23:48:31.870022 2366 kubelet.go:386] "Adding apiserver pod source" Sep 4 23:48:31.870051 kubelet[2366]: I0904 23:48:31.870046 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:48:31.880028 kubelet[2366]: E0904 23:48:31.879965 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 23:48:31.880226 kubelet[2366]: E0904 23:48:31.880180 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:48:31.880681 kubelet[2366]: I0904 23:48:31.880642 2366 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:48:31.881308 kubelet[2366]: I0904 23:48:31.881272 2366 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 4 23:48:31.882009 kubelet[2366]: W0904 23:48:31.881970 2366 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 23:48:31.885221 kubelet[2366]: I0904 23:48:31.885170 2366 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:48:31.885290 kubelet[2366]: I0904 23:48:31.885232 2366 server.go:1289] "Started kubelet" Sep 4 23:48:31.885642 kubelet[2366]: I0904 23:48:31.885460 2366 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:48:31.886733 kubelet[2366]: I0904 23:48:31.886229 2366 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:48:31.886864 kubelet[2366]: I0904 23:48:31.886822 2366 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:48:31.890843 kubelet[2366]: I0904 23:48:31.890807 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:48:31.891554 kubelet[2366]: I0904 23:48:31.891496 2366 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:48:31.894513 kubelet[2366]: I0904 23:48:31.892804 2366 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:48:31.894513 kubelet[2366]: E0904 23:48:31.892931 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:31.894513 kubelet[2366]: I0904 23:48:31.893280 2366 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:48:31.894513 kubelet[2366]: I0904 23:48:31.893407 2366 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:48:31.894513 kubelet[2366]: I0904 23:48:31.893518 2366 server.go:317] "Adding debug handlers to kubelet server" Sep 4 23:48:31.894513 kubelet[2366]: E0904 23:48:31.893730 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 23:48:31.894513 kubelet[2366]: E0904 23:48:31.893790 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="200ms" Sep 4 23:48:31.896347 kubelet[2366]: I0904 23:48:31.896291 2366 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:48:31.896347 kubelet[2366]: E0904 23:48:31.896348 2366 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:48:31.898965 kubelet[2366]: E0904 23:48:31.896841 2366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.62:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.62:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623936bcb962bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 23:48:31.885198011 +0000 UTC m=+0.427921858,LastTimestamp:2025-09-04 23:48:31.885198011 +0000 UTC m=+0.427921858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 23:48:31.899425 kubelet[2366]: I0904 23:48:31.899255 2366 factory.go:223] Registration 
of the containerd container factory successfully Sep 4 23:48:31.899425 kubelet[2366]: I0904 23:48:31.899414 2366 factory.go:223] Registration of the systemd container factory successfully Sep 4 23:48:31.913644 kubelet[2366]: I0904 23:48:31.913612 2366 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:48:31.913644 kubelet[2366]: I0904 23:48:31.913632 2366 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:48:31.913644 kubelet[2366]: I0904 23:48:31.913652 2366 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:48:31.920054 kubelet[2366]: I0904 23:48:31.920017 2366 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 4 23:48:31.921692 kubelet[2366]: I0904 23:48:31.921663 2366 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 4 23:48:31.921692 kubelet[2366]: I0904 23:48:31.921685 2366 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 4 23:48:31.921775 kubelet[2366]: I0904 23:48:31.921709 2366 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 23:48:31.921775 kubelet[2366]: I0904 23:48:31.921733 2366 kubelet.go:2436] "Starting kubelet main sync loop" Sep 4 23:48:31.921827 kubelet[2366]: E0904 23:48:31.921773 2366 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:48:31.922373 kubelet[2366]: E0904 23:48:31.922347 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 23:48:31.993582 kubelet[2366]: E0904 23:48:31.993485 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:32.022860 kubelet[2366]: E0904 23:48:32.022784 2366 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:48:32.094599 kubelet[2366]: E0904 23:48:32.094403 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:32.095339 kubelet[2366]: E0904 23:48:32.095300 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="400ms" Sep 4 23:48:32.194655 kubelet[2366]: E0904 23:48:32.194595 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:32.223980 kubelet[2366]: E0904 23:48:32.223890 2366 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:48:32.295454 kubelet[2366]: E0904 23:48:32.295386 2366 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:32.395800 kubelet[2366]: E0904 23:48:32.395601 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:32.496272 kubelet[2366]: E0904 23:48:32.496204 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:32.496883 kubelet[2366]: E0904 23:48:32.496835 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="800ms" Sep 4 23:48:32.597404 kubelet[2366]: E0904 23:48:32.597331 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:32.624660 kubelet[2366]: E0904 23:48:32.624586 2366 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:48:32.698161 kubelet[2366]: E0904 23:48:32.698056 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:32.798649 kubelet[2366]: E0904 23:48:32.798577 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:32.899597 kubelet[2366]: E0904 23:48:32.899529 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:32.968992 kubelet[2366]: E0904 23:48:32.968818 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 23:48:33.000558 kubelet[2366]: E0904 23:48:33.000486 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:33.101162 kubelet[2366]: E0904 23:48:33.101052 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:33.201689 kubelet[2366]: E0904 23:48:33.201610 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:33.258554 kubelet[2366]: E0904 23:48:33.258397 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 23:48:33.284492 kubelet[2366]: E0904 23:48:33.284439 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 23:48:33.297554 kubelet[2366]: E0904 23:48:33.297488 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="1.6s" Sep 4 23:48:33.302580 kubelet[2366]: E0904 23:48:33.302520 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:33.358762 kubelet[2366]: E0904 23:48:33.358687 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:48:33.403065 kubelet[2366]: E0904 23:48:33.402976 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:33.411898 kubelet[2366]: I0904 23:48:33.411861 2366 policy_none.go:49] "None policy: Start" Sep 4 23:48:33.411898 kubelet[2366]: I0904 23:48:33.411903 2366 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:48:33.412007 kubelet[2366]: I0904 23:48:33.411921 2366 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:48:33.425239 kubelet[2366]: E0904 23:48:33.425166 2366 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:48:33.503765 kubelet[2366]: E0904 23:48:33.503707 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:33.565093 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 23:48:33.584553 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 23:48:33.604396 kubelet[2366]: E0904 23:48:33.604349 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:48:33.607607 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 4 23:48:33.609263 kubelet[2366]: E0904 23:48:33.609238 2366 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 4 23:48:33.609524 kubelet[2366]: I0904 23:48:33.609494 2366 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:48:33.609678 kubelet[2366]: I0904 23:48:33.609534 2366 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:48:33.609749 kubelet[2366]: I0904 23:48:33.609727 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:48:33.610738 kubelet[2366]: E0904 23:48:33.610703 2366 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:48:33.610780 kubelet[2366]: E0904 23:48:33.610771 2366 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 23:48:33.712393 kubelet[2366]: I0904 23:48:33.712327 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:48:33.712880 kubelet[2366]: E0904 23:48:33.712831 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Sep 4 23:48:33.897537 kubelet[2366]: E0904 23:48:33.897321 2366 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 4 23:48:33.914424 kubelet[2366]: I0904 23:48:33.914368 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 
23:48:33.914812 kubelet[2366]: E0904 23:48:33.914771 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Sep 4 23:48:34.317366 kubelet[2366]: I0904 23:48:34.317309 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:48:34.317860 kubelet[2366]: E0904 23:48:34.317794 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Sep 4 23:48:34.630101 kubelet[2366]: E0904 23:48:34.629776 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 23:48:34.899824 kubelet[2366]: E0904 23:48:34.899629 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="3.2s" Sep 4 23:48:35.092552 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 4 23:48:35.101435 kubelet[2366]: E0904 23:48:35.101341 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:48:35.114752 kubelet[2366]: I0904 23:48:35.114666 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 4 23:48:35.114894 kubelet[2366]: I0904 23:48:35.114780 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53ffa337cc5c5d997b3ed3c06420ff77-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ffa337cc5c5d997b3ed3c06420ff77\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:48:35.115252 kubelet[2366]: I0904 23:48:35.115212 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53ffa337cc5c5d997b3ed3c06420ff77-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ffa337cc5c5d997b3ed3c06420ff77\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:48:35.115294 kubelet[2366]: I0904 23:48:35.115251 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53ffa337cc5c5d997b3ed3c06420ff77-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53ffa337cc5c5d997b3ed3c06420ff77\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:48:35.121729 kubelet[2366]: I0904 23:48:35.121196 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:48:35.121729 kubelet[2366]: E0904 23:48:35.121631 2366 kubelet_node_status.go:107] 
"Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Sep 4 23:48:35.139067 systemd[1]: Created slice kubepods-burstable-pod53ffa337cc5c5d997b3ed3c06420ff77.slice - libcontainer container kubepods-burstable-pod53ffa337cc5c5d997b3ed3c06420ff77.slice. Sep 4 23:48:35.142706 kubelet[2366]: E0904 23:48:35.142458 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:48:35.210369 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 4 23:48:35.212527 kubelet[2366]: E0904 23:48:35.212493 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:48:35.215704 kubelet[2366]: I0904 23:48:35.215655 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:48:35.215704 kubelet[2366]: I0904 23:48:35.215697 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:48:35.215881 kubelet[2366]: I0904 23:48:35.215719 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:48:35.215881 kubelet[2366]: I0904 23:48:35.215737 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:48:35.215881 kubelet[2366]: I0904 23:48:35.215794 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:48:35.402448 kubelet[2366]: E0904 23:48:35.402382 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:35.403491 containerd[1493]: time="2025-09-04T23:48:35.403429653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 4 23:48:35.444151 kubelet[2366]: E0904 23:48:35.444089 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:35.444896 containerd[1493]: time="2025-09-04T23:48:35.444834097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53ffa337cc5c5d997b3ed3c06420ff77,Namespace:kube-system,Attempt:0,}" Sep 4 
23:48:35.513813 kubelet[2366]: E0904 23:48:35.513618 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:35.514888 containerd[1493]: time="2025-09-04T23:48:35.514629866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 4 23:48:35.546116 kubelet[2366]: E0904 23:48:35.546047 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 23:48:35.629813 kubelet[2366]: E0904 23:48:35.629704 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:48:36.433190 kubelet[2366]: E0904 23:48:36.433101 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 23:48:36.731101 kubelet[2366]: I0904 23:48:36.729050 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:48:36.731437 kubelet[2366]: E0904 23:48:36.731357 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: 
connection refused" node="localhost" Sep 4 23:48:37.142826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420627949.mount: Deactivated successfully. Sep 4 23:48:37.156446 containerd[1493]: time="2025-09-04T23:48:37.156357506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:48:37.159009 containerd[1493]: time="2025-09-04T23:48:37.158952605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 23:48:37.166194 containerd[1493]: time="2025-09-04T23:48:37.165795887Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:48:37.168071 containerd[1493]: time="2025-09-04T23:48:37.167980022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:48:37.168961 containerd[1493]: time="2025-09-04T23:48:37.168923662Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:48:37.169949 containerd[1493]: time="2025-09-04T23:48:37.169896277Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:48:37.171247 containerd[1493]: time="2025-09-04T23:48:37.171205363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:48:37.172778 containerd[1493]: time="2025-09-04T23:48:37.172744389Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:48:37.173576 containerd[1493]: time="2025-09-04T23:48:37.173518886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.76995224s" Sep 4 23:48:37.180375 containerd[1493]: time="2025-09-04T23:48:37.180308504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.665537529s" Sep 4 23:48:37.181481 containerd[1493]: time="2025-09-04T23:48:37.181438139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.736485986s" Sep 4 23:48:37.485465 containerd[1493]: time="2025-09-04T23:48:37.485309586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:48:37.487719 containerd[1493]: time="2025-09-04T23:48:37.487564015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:48:37.487719 containerd[1493]: time="2025-09-04T23:48:37.487605063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:37.487931 containerd[1493]: time="2025-09-04T23:48:37.487717428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:37.489172 containerd[1493]: time="2025-09-04T23:48:37.488870838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:48:37.489172 containerd[1493]: time="2025-09-04T23:48:37.488948506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:48:37.489172 containerd[1493]: time="2025-09-04T23:48:37.488964687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:37.489172 containerd[1493]: time="2025-09-04T23:48:37.489073354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:37.490912 containerd[1493]: time="2025-09-04T23:48:37.490650321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:48:37.490912 containerd[1493]: time="2025-09-04T23:48:37.490717148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:48:37.490912 containerd[1493]: time="2025-09-04T23:48:37.490734231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:37.490912 containerd[1493]: time="2025-09-04T23:48:37.490853228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:37.541464 systemd[1]: Started cri-containerd-69d891b445f3c8cd626d31cf4b14d6cded8deccd44e48a30ce5c1caf083c1d9d.scope - libcontainer container 69d891b445f3c8cd626d31cf4b14d6cded8deccd44e48a30ce5c1caf083c1d9d. Sep 4 23:48:37.550663 systemd[1]: Started cri-containerd-199e3cb2d539f8d57c0d69826c23d4ef4c4ae0e414ff478da0b51d1ced942852.scope - libcontainer container 199e3cb2d539f8d57c0d69826c23d4ef4c4ae0e414ff478da0b51d1ced942852. Sep 4 23:48:37.556477 systemd[1]: Started cri-containerd-7d08f6566b9d39a518941005770cee25e66a6072ca68604fe942cf9fed79933f.scope - libcontainer container 7d08f6566b9d39a518941005770cee25e66a6072ca68604fe942cf9fed79933f. Sep 4 23:48:37.627995 containerd[1493]: time="2025-09-04T23:48:37.627922920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"199e3cb2d539f8d57c0d69826c23d4ef4c4ae0e414ff478da0b51d1ced942852\"" Sep 4 23:48:37.630066 kubelet[2366]: E0904 23:48:37.629713 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:37.634926 containerd[1493]: time="2025-09-04T23:48:37.634839802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53ffa337cc5c5d997b3ed3c06420ff77,Namespace:kube-system,Attempt:0,} returns sandbox id \"69d891b445f3c8cd626d31cf4b14d6cded8deccd44e48a30ce5c1caf083c1d9d\"" Sep 4 23:48:37.635512 kubelet[2366]: E0904 23:48:37.635471 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:37.641434 containerd[1493]: time="2025-09-04T23:48:37.641390254Z" level=info msg="CreateContainer within sandbox \"199e3cb2d539f8d57c0d69826c23d4ef4c4ae0e414ff478da0b51d1ced942852\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:48:37.644620 containerd[1493]: time="2025-09-04T23:48:37.644514182Z" level=info msg="CreateContainer within sandbox \"69d891b445f3c8cd626d31cf4b14d6cded8deccd44e48a30ce5c1caf083c1d9d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:48:37.657991 containerd[1493]: time="2025-09-04T23:48:37.657854063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d08f6566b9d39a518941005770cee25e66a6072ca68604fe942cf9fed79933f\"" Sep 4 23:48:37.659102 kubelet[2366]: E0904 23:48:37.659072 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:37.668249 containerd[1493]: time="2025-09-04T23:48:37.667950578Z" level=info msg="CreateContainer within sandbox \"7d08f6566b9d39a518941005770cee25e66a6072ca68604fe942cf9fed79933f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:48:38.006292 containerd[1493]: time="2025-09-04T23:48:38.006204835Z" level=info msg="CreateContainer within sandbox \"199e3cb2d539f8d57c0d69826c23d4ef4c4ae0e414ff478da0b51d1ced942852\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"abab843893d0c0227898a934a5f4c33dddd34686111756b13ebfabbc2434530b\"" Sep 4 23:48:38.008886 containerd[1493]: time="2025-09-04T23:48:38.007990849Z" level=info msg="StartContainer for \"abab843893d0c0227898a934a5f4c33dddd34686111756b13ebfabbc2434530b\"" Sep 4 23:48:38.057546 containerd[1493]: 
time="2025-09-04T23:48:38.057473059Z" level=info msg="CreateContainer within sandbox \"69d891b445f3c8cd626d31cf4b14d6cded8deccd44e48a30ce5c1caf083c1d9d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4960c80dd9f4f563ead289698fcea2ffac0ff77cf8e77360394550cfbf1f57cc\"" Sep 4 23:48:38.058491 containerd[1493]: time="2025-09-04T23:48:38.058440764Z" level=info msg="StartContainer for \"4960c80dd9f4f563ead289698fcea2ffac0ff77cf8e77360394550cfbf1f57cc\"" Sep 4 23:48:38.079289 containerd[1493]: time="2025-09-04T23:48:38.078579265Z" level=info msg="CreateContainer within sandbox \"7d08f6566b9d39a518941005770cee25e66a6072ca68604fe942cf9fed79933f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eea8da9aa965de52cbb3cc50db85283b4feecf727964f2ff7f93002999c2062f\"" Sep 4 23:48:38.079701 containerd[1493]: time="2025-09-04T23:48:38.079668251Z" level=info msg="StartContainer for \"eea8da9aa965de52cbb3cc50db85283b4feecf727964f2ff7f93002999c2062f\"" Sep 4 23:48:38.087441 systemd[1]: Started cri-containerd-abab843893d0c0227898a934a5f4c33dddd34686111756b13ebfabbc2434530b.scope - libcontainer container abab843893d0c0227898a934a5f4c33dddd34686111756b13ebfabbc2434530b. Sep 4 23:48:38.103718 kubelet[2366]: E0904 23:48:38.103630 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="6.4s" Sep 4 23:48:38.132421 systemd[1]: Started cri-containerd-4960c80dd9f4f563ead289698fcea2ffac0ff77cf8e77360394550cfbf1f57cc.scope - libcontainer container 4960c80dd9f4f563ead289698fcea2ffac0ff77cf8e77360394550cfbf1f57cc. 
Sep 4 23:48:38.152464 kubelet[2366]: E0904 23:48:38.152412 2366 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 4 23:48:38.181651 systemd[1]: Started cri-containerd-eea8da9aa965de52cbb3cc50db85283b4feecf727964f2ff7f93002999c2062f.scope - libcontainer container eea8da9aa965de52cbb3cc50db85283b4feecf727964f2ff7f93002999c2062f. Sep 4 23:48:38.240999 containerd[1493]: time="2025-09-04T23:48:38.240950059Z" level=info msg="StartContainer for \"abab843893d0c0227898a934a5f4c33dddd34686111756b13ebfabbc2434530b\" returns successfully" Sep 4 23:48:38.292058 containerd[1493]: time="2025-09-04T23:48:38.291841195Z" level=info msg="StartContainer for \"4960c80dd9f4f563ead289698fcea2ffac0ff77cf8e77360394550cfbf1f57cc\" returns successfully" Sep 4 23:48:38.293451 containerd[1493]: time="2025-09-04T23:48:38.293046913Z" level=info msg="StartContainer for \"eea8da9aa965de52cbb3cc50db85283b4feecf727964f2ff7f93002999c2062f\" returns successfully" Sep 4 23:48:38.956387 kubelet[2366]: E0904 23:48:38.956333 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:48:38.956942 kubelet[2366]: E0904 23:48:38.956471 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:38.959433 kubelet[2366]: E0904 23:48:38.959396 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:48:38.959546 kubelet[2366]: E0904 23:48:38.959522 2366 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:38.960381 kubelet[2366]: E0904 23:48:38.960355 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:48:38.960458 kubelet[2366]: E0904 23:48:38.960436 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:39.885331 kubelet[2366]: I0904 23:48:39.884752 2366 apiserver.go:52] "Watching apiserver" Sep 4 23:48:39.893583 kubelet[2366]: I0904 23:48:39.893541 2366 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:48:39.933303 kubelet[2366]: I0904 23:48:39.933258 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:48:39.963496 kubelet[2366]: E0904 23:48:39.962932 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:48:39.963496 kubelet[2366]: E0904 23:48:39.963027 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:48:39.963496 kubelet[2366]: E0904 23:48:39.963187 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:39.963496 kubelet[2366]: E0904 23:48:39.963217 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:39.963496 kubelet[2366]: E0904 23:48:39.963259 2366 kubelet.go:3305] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:48:39.963496 kubelet[2366]: E0904 23:48:39.963425 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:40.562312 kubelet[2366]: I0904 23:48:40.562232 2366 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 23:48:40.562312 kubelet[2366]: E0904 23:48:40.562314 2366 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 4 23:48:40.593417 kubelet[2366]: I0904 23:48:40.593327 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:48:40.963027 kubelet[2366]: I0904 23:48:40.962991 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:48:40.963275 kubelet[2366]: I0904 23:48:40.963225 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:48:41.000983 kubelet[2366]: E0904 23:48:41.000924 2366 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 23:48:41.000983 kubelet[2366]: I0904 23:48:41.000965 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:48:41.004990 kubelet[2366]: E0904 23:48:41.004946 2366 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 23:48:41.005213 kubelet[2366]: E0904 23:48:41.005178 2366 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:41.005372 kubelet[2366]: E0904 23:48:41.005343 2366 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 23:48:41.005509 kubelet[2366]: E0904 23:48:41.005485 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:41.008356 kubelet[2366]: E0904 23:48:41.008323 2366 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:48:41.008356 kubelet[2366]: I0904 23:48:41.008354 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:48:41.016142 kubelet[2366]: E0904 23:48:41.016078 2366 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 23:48:41.965412 kubelet[2366]: I0904 23:48:41.965356 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:48:43.448346 kubelet[2366]: I0904 23:48:43.448297 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:48:44.592303 kubelet[2366]: E0904 23:48:44.590807 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:44.981290 kubelet[2366]: E0904 23:48:44.977929 2366 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:45.312597 kubelet[2366]: E0904 23:48:45.311773 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:45.979670 kubelet[2366]: E0904 23:48:45.979618 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:48.817431 kubelet[2366]: I0904 23:48:48.817362 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:48:49.128097 kubelet[2366]: E0904 23:48:49.127854 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:49.986348 kubelet[2366]: E0904 23:48:49.986284 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:49.990047 kubelet[2366]: I0904 23:48:49.987536 2366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.987509523 podStartE2EDuration="8.987509523s" podCreationTimestamp="2025-09-04 23:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:48:49.492736896 +0000 UTC m=+18.035460743" watchObservedRunningTime="2025-09-04 23:48:49.987509523 +0000 UTC m=+18.530233370" Sep 4 23:48:50.076270 kubelet[2366]: I0904 23:48:50.075977 2366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.075957993 podStartE2EDuration="7.075957993s" podCreationTimestamp="2025-09-04 23:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:48:49.988698521 +0000 UTC m=+18.531422368" watchObservedRunningTime="2025-09-04 23:48:50.075957993 +0000 UTC m=+18.618681840" Sep 4 23:48:51.865055 kubelet[2366]: E0904 23:48:51.864429 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:48:52.383170 kubelet[2366]: I0904 23:48:52.382571 2366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.382543474 podStartE2EDuration="4.382543474s" podCreationTimestamp="2025-09-04 23:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:48:50.07619854 +0000 UTC m=+18.618922407" watchObservedRunningTime="2025-09-04 23:48:52.382543474 +0000 UTC m=+20.925267321" Sep 4 23:49:03.493408 kubelet[2366]: E0904 23:49:03.493367 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:04.011817 kubelet[2366]: E0904 23:49:04.011779 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:04.392620 systemd[1]: Reload requested from client PID 2659 ('systemctl') (unit session-7.scope)... Sep 4 23:49:04.392647 systemd[1]: Reloading... Sep 4 23:49:04.497199 zram_generator::config[2703]: No configuration found. 
Sep 4 23:49:04.640342 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:49:04.771988 systemd[1]: Reloading finished in 378 ms. Sep 4 23:49:04.804652 kubelet[2366]: I0904 23:49:04.804584 2366 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:49:04.804710 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:49:04.824209 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:49:04.824640 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:49:04.824724 systemd[1]: kubelet.service: Consumed 2.153s CPU time, 134.7M memory peak. Sep 4 23:49:04.836436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:49:05.044997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:49:05.050616 (kubelet)[2748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:49:05.095976 kubelet[2748]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:49:05.095976 kubelet[2748]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:49:05.095976 kubelet[2748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:49:05.096482 kubelet[2748]: I0904 23:49:05.096063 2748 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:49:05.103705 kubelet[2748]: I0904 23:49:05.103659 2748 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 4 23:49:05.103705 kubelet[2748]: I0904 23:49:05.103689 2748 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:49:05.103980 kubelet[2748]: I0904 23:49:05.103959 2748 server.go:956] "Client rotation is on, will bootstrap in background" Sep 4 23:49:05.105305 kubelet[2748]: I0904 23:49:05.105287 2748 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 4 23:49:05.107876 kubelet[2748]: I0904 23:49:05.107811 2748 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:49:05.113092 kubelet[2748]: E0904 23:49:05.113043 2748 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:49:05.113092 kubelet[2748]: I0904 23:49:05.113079 2748 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:49:05.118670 kubelet[2748]: I0904 23:49:05.118509 2748 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:49:05.119001 kubelet[2748]: I0904 23:49:05.118944 2748 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:49:05.119302 kubelet[2748]: I0904 23:49:05.119030 2748 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:49:05.119426 kubelet[2748]: I0904 23:49:05.119314 2748 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:49:05.119426 
kubelet[2748]: I0904 23:49:05.119329 2748 container_manager_linux.go:303] "Creating device plugin manager" Sep 4 23:49:05.119426 kubelet[2748]: I0904 23:49:05.119396 2748 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:49:05.119648 kubelet[2748]: I0904 23:49:05.119629 2748 kubelet.go:480] "Attempting to sync node with API server" Sep 4 23:49:05.119648 kubelet[2748]: I0904 23:49:05.119646 2748 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:49:05.119745 kubelet[2748]: I0904 23:49:05.119673 2748 kubelet.go:386] "Adding apiserver pod source" Sep 4 23:49:05.119745 kubelet[2748]: I0904 23:49:05.119693 2748 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:49:05.124156 kubelet[2748]: I0904 23:49:05.122505 2748 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:49:05.124615 kubelet[2748]: I0904 23:49:05.124582 2748 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 4 23:49:05.128088 kubelet[2748]: I0904 23:49:05.128047 2748 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:49:05.128207 kubelet[2748]: I0904 23:49:05.128113 2748 server.go:1289] "Started kubelet" Sep 4 23:49:05.131182 kubelet[2748]: I0904 23:49:05.129415 2748 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:49:05.131182 kubelet[2748]: I0904 23:49:05.129871 2748 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:49:05.131182 kubelet[2748]: I0904 23:49:05.129956 2748 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:49:05.131182 kubelet[2748]: I0904 23:49:05.130201 2748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:49:05.131872 kubelet[2748]: I0904 
23:49:05.131846 2748 server.go:317] "Adding debug handlers to kubelet server" Sep 4 23:49:05.135176 kubelet[2748]: I0904 23:49:05.134625 2748 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:49:05.137022 kubelet[2748]: I0904 23:49:05.136995 2748 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:49:05.137399 kubelet[2748]: E0904 23:49:05.137373 2748 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:05.138855 kubelet[2748]: I0904 23:49:05.138839 2748 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:49:05.140461 kubelet[2748]: I0904 23:49:05.139640 2748 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:49:05.141810 kubelet[2748]: I0904 23:49:05.141790 2748 factory.go:223] Registration of the containerd container factory successfully Sep 4 23:49:05.141920 kubelet[2748]: I0904 23:49:05.141909 2748 factory.go:223] Registration of the systemd container factory successfully Sep 4 23:49:05.142108 kubelet[2748]: I0904 23:49:05.142069 2748 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:49:05.144621 kubelet[2748]: E0904 23:49:05.144531 2748 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:49:05.148892 kubelet[2748]: I0904 23:49:05.148836 2748 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 4 23:49:05.150175 kubelet[2748]: I0904 23:49:05.150157 2748 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 4 23:49:05.150175 kubelet[2748]: I0904 23:49:05.150174 2748 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 4 23:49:05.150296 kubelet[2748]: I0904 23:49:05.150194 2748 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 23:49:05.150296 kubelet[2748]: I0904 23:49:05.150204 2748 kubelet.go:2436] "Starting kubelet main sync loop" Sep 4 23:49:05.150296 kubelet[2748]: E0904 23:49:05.150245 2748 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:49:05.185940 kubelet[2748]: I0904 23:49:05.185901 2748 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:49:05.185940 kubelet[2748]: I0904 23:49:05.185921 2748 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:49:05.185940 kubelet[2748]: I0904 23:49:05.185943 2748 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:49:05.186214 kubelet[2748]: I0904 23:49:05.186103 2748 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:49:05.186214 kubelet[2748]: I0904 23:49:05.186116 2748 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:49:05.186214 kubelet[2748]: I0904 23:49:05.186182 2748 policy_none.go:49] "None policy: Start" Sep 4 23:49:05.186214 kubelet[2748]: I0904 23:49:05.186203 2748 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:49:05.186335 kubelet[2748]: I0904 23:49:05.186228 2748 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:49:05.186381 kubelet[2748]: I0904 23:49:05.186348 2748 state_mem.go:75] "Updated machine memory state" Sep 4 23:49:05.193682 kubelet[2748]: E0904 23:49:05.193638 2748 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 4 23:49:05.193873 kubelet[2748]: I0904 23:49:05.193841 
2748 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:49:05.193873 kubelet[2748]: I0904 23:49:05.193860 2748 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:49:05.194810 kubelet[2748]: I0904 23:49:05.194157 2748 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:49:05.195306 kubelet[2748]: E0904 23:49:05.195275 2748 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:49:05.252187 kubelet[2748]: I0904 23:49:05.252024 2748 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:49:05.252187 kubelet[2748]: I0904 23:49:05.252064 2748 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:49:05.252457 kubelet[2748]: I0904 23:49:05.252252 2748 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:05.300697 kubelet[2748]: I0904 23:49:05.300561 2748 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:49:05.340687 kubelet[2748]: I0904 23:49:05.340615 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:05.340687 kubelet[2748]: I0904 23:49:05.340672 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:05.340901 kubelet[2748]: I0904 23:49:05.340752 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53ffa337cc5c5d997b3ed3c06420ff77-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ffa337cc5c5d997b3ed3c06420ff77\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:49:05.340901 kubelet[2748]: I0904 23:49:05.340817 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:05.340901 kubelet[2748]: I0904 23:49:05.340847 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:05.340901 kubelet[2748]: I0904 23:49:05.340867 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 4 23:49:05.340901 kubelet[2748]: I0904 23:49:05.340887 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53ffa337cc5c5d997b3ed3c06420ff77-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ffa337cc5c5d997b3ed3c06420ff77\") " 
pod="kube-system/kube-apiserver-localhost" Sep 4 23:49:05.341106 kubelet[2748]: I0904 23:49:05.340936 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53ffa337cc5c5d997b3ed3c06420ff77-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53ffa337cc5c5d997b3ed3c06420ff77\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:49:05.341106 kubelet[2748]: I0904 23:49:05.340962 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:05.535899 kubelet[2748]: E0904 23:49:05.534654 2748 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 23:49:05.535899 kubelet[2748]: E0904 23:49:05.534900 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:05.535899 kubelet[2748]: E0904 23:49:05.535156 2748 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:05.535899 kubelet[2748]: E0904 23:49:05.535233 2748 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 23:49:05.535899 kubelet[2748]: E0904 23:49:05.535335 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:05.535899 
kubelet[2748]: E0904 23:49:05.535733 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:05.537032 kubelet[2748]: I0904 23:49:05.537005 2748 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 4 23:49:05.537096 kubelet[2748]: I0904 23:49:05.537070 2748 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 23:49:06.120601 kubelet[2748]: I0904 23:49:06.120545 2748 apiserver.go:52] "Watching apiserver" Sep 4 23:49:06.139885 kubelet[2748]: I0904 23:49:06.139826 2748 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:49:06.165578 kubelet[2748]: I0904 23:49:06.164699 2748 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:49:06.165578 kubelet[2748]: E0904 23:49:06.164815 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:06.165578 kubelet[2748]: E0904 23:49:06.164852 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:06.921968 kubelet[2748]: E0904 23:49:06.921904 2748 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 23:49:06.922219 kubelet[2748]: E0904 23:49:06.922195 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:07.167603 kubelet[2748]: E0904 23:49:07.166699 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:07.167603 kubelet[2748]: E0904 23:49:07.166748 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:11.347378 kubelet[2748]: I0904 23:49:11.347330 2748 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:49:11.347983 containerd[1493]: time="2025-09-04T23:49:11.347797702Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:49:11.348368 kubelet[2748]: I0904 23:49:11.348016 2748 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:49:12.136568 kubelet[2748]: E0904 23:49:12.136527 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:12.175661 kubelet[2748]: E0904 23:49:12.175632 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:12.229095 kubelet[2748]: E0904 23:49:12.228989 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:13.178016 kubelet[2748]: E0904 23:49:13.177960 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:14.180273 kubelet[2748]: E0904 23:49:14.180232 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 4 23:49:14.702902 kubelet[2748]: E0904 23:49:14.699769 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:15.181923 kubelet[2748]: E0904 23:49:15.181865 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:25.782005 systemd[1]: Created slice kubepods-besteffort-pod9cb721e8_2260_489f_af4a_fd99d76ca2f5.slice - libcontainer container kubepods-besteffort-pod9cb721e8_2260_489f_af4a_fd99d76ca2f5.slice. Sep 4 23:49:25.869970 kubelet[2748]: I0904 23:49:25.869885 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j67p\" (UniqueName: \"kubernetes.io/projected/9cb721e8-2260-489f-af4a-fd99d76ca2f5-kube-api-access-6j67p\") pod \"kube-proxy-7sbdh\" (UID: \"9cb721e8-2260-489f-af4a-fd99d76ca2f5\") " pod="kube-system/kube-proxy-7sbdh" Sep 4 23:49:25.869970 kubelet[2748]: I0904 23:49:25.869957 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9cb721e8-2260-489f-af4a-fd99d76ca2f5-kube-proxy\") pod \"kube-proxy-7sbdh\" (UID: \"9cb721e8-2260-489f-af4a-fd99d76ca2f5\") " pod="kube-system/kube-proxy-7sbdh" Sep 4 23:49:25.869970 kubelet[2748]: I0904 23:49:25.869990 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cb721e8-2260-489f-af4a-fd99d76ca2f5-xtables-lock\") pod \"kube-proxy-7sbdh\" (UID: \"9cb721e8-2260-489f-af4a-fd99d76ca2f5\") " pod="kube-system/kube-proxy-7sbdh" Sep 4 23:49:25.870629 kubelet[2748]: I0904 23:49:25.870047 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cb721e8-2260-489f-af4a-fd99d76ca2f5-lib-modules\") pod \"kube-proxy-7sbdh\" (UID: \"9cb721e8-2260-489f-af4a-fd99d76ca2f5\") " pod="kube-system/kube-proxy-7sbdh" Sep 4 23:49:25.909050 systemd[1]: Created slice kubepods-besteffort-pod22ac45a3_a292_4e07_9559_1f29349035f6.slice - libcontainer container kubepods-besteffort-pod22ac45a3_a292_4e07_9559_1f29349035f6.slice. Sep 4 23:49:25.971398 kubelet[2748]: I0904 23:49:25.970749 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22ac45a3-a292-4e07-9559-1f29349035f6-var-lib-calico\") pod \"tigera-operator-755d956888-vzksq\" (UID: \"22ac45a3-a292-4e07-9559-1f29349035f6\") " pod="tigera-operator/tigera-operator-755d956888-vzksq" Sep 4 23:49:25.971398 kubelet[2748]: I0904 23:49:25.970896 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9w7s\" (UniqueName: \"kubernetes.io/projected/22ac45a3-a292-4e07-9559-1f29349035f6-kube-api-access-d9w7s\") pod \"tigera-operator-755d956888-vzksq\" (UID: \"22ac45a3-a292-4e07-9559-1f29349035f6\") " pod="tigera-operator/tigera-operator-755d956888-vzksq" Sep 4 23:49:26.097058 kubelet[2748]: E0904 23:49:26.096817 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:26.097894 containerd[1493]: time="2025-09-04T23:49:26.097771877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sbdh,Uid:9cb721e8-2260-489f-af4a-fd99d76ca2f5,Namespace:kube-system,Attempt:0,}" Sep 4 23:49:26.140036 containerd[1493]: time="2025-09-04T23:49:26.139241305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:26.140036 containerd[1493]: time="2025-09-04T23:49:26.139921579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:26.140036 containerd[1493]: time="2025-09-04T23:49:26.139997011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:26.140327 containerd[1493]: time="2025-09-04T23:49:26.140195305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:26.169540 systemd[1]: Started cri-containerd-3aa593d824699b71fbab4b34a56f626ec7f27a7117db965002d37e0ec0c534f9.scope - libcontainer container 3aa593d824699b71fbab4b34a56f626ec7f27a7117db965002d37e0ec0c534f9. Sep 4 23:49:26.187504 kubelet[2748]: E0904 23:49:26.187442 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:26.200630 containerd[1493]: time="2025-09-04T23:49:26.200570967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sbdh,Uid:9cb721e8-2260-489f-af4a-fd99d76ca2f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aa593d824699b71fbab4b34a56f626ec7f27a7117db965002d37e0ec0c534f9\"" Sep 4 23:49:26.201507 kubelet[2748]: E0904 23:49:26.201466 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:26.215645 containerd[1493]: time="2025-09-04T23:49:26.215583192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-vzksq,Uid:22ac45a3-a292-4e07-9559-1f29349035f6,Namespace:tigera-operator,Attempt:0,}" Sep 4 23:49:26.388554 containerd[1493]: 
time="2025-09-04T23:49:26.386427245Z" level=info msg="CreateContainer within sandbox \"3aa593d824699b71fbab4b34a56f626ec7f27a7117db965002d37e0ec0c534f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:49:26.784795 containerd[1493]: time="2025-09-04T23:49:26.784728371Z" level=info msg="CreateContainer within sandbox \"3aa593d824699b71fbab4b34a56f626ec7f27a7117db965002d37e0ec0c534f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b965490a8311dd8d18ed5d656d2824f0895aed7da6eada7bb91f51521c93705a\"" Sep 4 23:49:26.785604 containerd[1493]: time="2025-09-04T23:49:26.785571582Z" level=info msg="StartContainer for \"b965490a8311dd8d18ed5d656d2824f0895aed7da6eada7bb91f51521c93705a\"" Sep 4 23:49:26.796780 containerd[1493]: time="2025-09-04T23:49:26.796653582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:26.796780 containerd[1493]: time="2025-09-04T23:49:26.796734305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:26.796780 containerd[1493]: time="2025-09-04T23:49:26.796770322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:26.797028 containerd[1493]: time="2025-09-04T23:49:26.796872695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:26.819394 systemd[1]: Started cri-containerd-7a646b4b08efaaebc6e818a07e9be75d4ddbda4e24a4cb3084b03a7eace2b77a.scope - libcontainer container 7a646b4b08efaaebc6e818a07e9be75d4ddbda4e24a4cb3084b03a7eace2b77a. 
Sep 4 23:49:26.823230 systemd[1]: Started cri-containerd-b965490a8311dd8d18ed5d656d2824f0895aed7da6eada7bb91f51521c93705a.scope - libcontainer container b965490a8311dd8d18ed5d656d2824f0895aed7da6eada7bb91f51521c93705a. Sep 4 23:49:26.872564 containerd[1493]: time="2025-09-04T23:49:26.872511738Z" level=info msg="StartContainer for \"b965490a8311dd8d18ed5d656d2824f0895aed7da6eada7bb91f51521c93705a\" returns successfully" Sep 4 23:49:26.879962 containerd[1493]: time="2025-09-04T23:49:26.879776485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-vzksq,Uid:22ac45a3-a292-4e07-9559-1f29349035f6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7a646b4b08efaaebc6e818a07e9be75d4ddbda4e24a4cb3084b03a7eace2b77a\"" Sep 4 23:49:26.882355 containerd[1493]: time="2025-09-04T23:49:26.882047179Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 4 23:49:27.204966 kubelet[2748]: E0904 23:49:27.204919 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:27.218781 kubelet[2748]: I0904 23:49:27.218691 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7sbdh" podStartSLOduration=15.218666712 podStartE2EDuration="15.218666712s" podCreationTimestamp="2025-09-04 23:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:49:27.21854319 +0000 UTC m=+22.163627094" watchObservedRunningTime="2025-09-04 23:49:27.218666712 +0000 UTC m=+22.163750626" Sep 4 23:49:28.224815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1750027937.mount: Deactivated successfully. 
Sep 4 23:49:29.555705 containerd[1493]: time="2025-09-04T23:49:29.555376500Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:29.557722 containerd[1493]: time="2025-09-04T23:49:29.557625082Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 4 23:49:29.562432 containerd[1493]: time="2025-09-04T23:49:29.562314758Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:29.566798 containerd[1493]: time="2025-09-04T23:49:29.566633625Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:29.567869 containerd[1493]: time="2025-09-04T23:49:29.567589017Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.685239147s" Sep 4 23:49:29.567869 containerd[1493]: time="2025-09-04T23:49:29.567649662Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 4 23:49:29.577214 containerd[1493]: time="2025-09-04T23:49:29.576960485Z" level=info msg="CreateContainer within sandbox \"7a646b4b08efaaebc6e818a07e9be75d4ddbda4e24a4cb3084b03a7eace2b77a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 23:49:29.602697 containerd[1493]: time="2025-09-04T23:49:29.602592999Z" level=info msg="CreateContainer within sandbox 
\"7a646b4b08efaaebc6e818a07e9be75d4ddbda4e24a4cb3084b03a7eace2b77a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dafdaa7769d5ff7bebcd8c40210cd518ec171edf881dfd0d7e23ff1f4b6be543\"" Sep 4 23:49:29.603342 containerd[1493]: time="2025-09-04T23:49:29.603315772Z" level=info msg="StartContainer for \"dafdaa7769d5ff7bebcd8c40210cd518ec171edf881dfd0d7e23ff1f4b6be543\"" Sep 4 23:49:29.645405 systemd[1]: Started cri-containerd-dafdaa7769d5ff7bebcd8c40210cd518ec171edf881dfd0d7e23ff1f4b6be543.scope - libcontainer container dafdaa7769d5ff7bebcd8c40210cd518ec171edf881dfd0d7e23ff1f4b6be543. Sep 4 23:49:29.681032 containerd[1493]: time="2025-09-04T23:49:29.680829753Z" level=info msg="StartContainer for \"dafdaa7769d5ff7bebcd8c40210cd518ec171edf881dfd0d7e23ff1f4b6be543\" returns successfully" Sep 4 23:49:36.830353 sudo[1679]: pam_unix(sudo:session): session closed for user root Sep 4 23:49:36.832507 sshd[1678]: Connection closed by 10.0.0.1 port 43396 Sep 4 23:49:36.835815 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:36.841950 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:49:36.842806 systemd[1]: sshd@6-10.0.0.62:22-10.0.0.1:43396.service: Deactivated successfully. Sep 4 23:49:36.849853 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:49:36.850584 systemd[1]: session-7.scope: Consumed 8.908s CPU time, 215.5M memory peak. Sep 4 23:49:36.853611 systemd-logind[1475]: Removed session 7. 
Sep 4 23:49:40.540580 kubelet[2748]: I0904 23:49:40.540490 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-vzksq" podStartSLOduration=25.85335407 podStartE2EDuration="28.540467542s" podCreationTimestamp="2025-09-04 23:49:12 +0000 UTC" firstStartedPulling="2025-09-04 23:49:26.881615926 +0000 UTC m=+21.826699830" lastFinishedPulling="2025-09-04 23:49:29.568729398 +0000 UTC m=+24.513813302" observedRunningTime="2025-09-04 23:49:30.353284578 +0000 UTC m=+25.298368492" watchObservedRunningTime="2025-09-04 23:49:40.540467542 +0000 UTC m=+35.485551446" Sep 4 23:49:40.555340 systemd[1]: Created slice kubepods-besteffort-podc0260609_a8c9_4e23_a707_2faee6934c9d.slice - libcontainer container kubepods-besteffort-podc0260609_a8c9_4e23_a707_2faee6934c9d.slice. Sep 4 23:49:40.567114 kubelet[2748]: I0904 23:49:40.566646 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cz4x\" (UniqueName: \"kubernetes.io/projected/c0260609-a8c9-4e23-a707-2faee6934c9d-kube-api-access-2cz4x\") pod \"calico-typha-7cffdd7445-wbxs2\" (UID: \"c0260609-a8c9-4e23-a707-2faee6934c9d\") " pod="calico-system/calico-typha-7cffdd7445-wbxs2" Sep 4 23:49:40.567114 kubelet[2748]: I0904 23:49:40.566685 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0260609-a8c9-4e23-a707-2faee6934c9d-tigera-ca-bundle\") pod \"calico-typha-7cffdd7445-wbxs2\" (UID: \"c0260609-a8c9-4e23-a707-2faee6934c9d\") " pod="calico-system/calico-typha-7cffdd7445-wbxs2" Sep 4 23:49:40.567114 kubelet[2748]: I0904 23:49:40.566714 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c0260609-a8c9-4e23-a707-2faee6934c9d-typha-certs\") pod \"calico-typha-7cffdd7445-wbxs2\" (UID: 
\"c0260609-a8c9-4e23-a707-2faee6934c9d\") " pod="calico-system/calico-typha-7cffdd7445-wbxs2" Sep 4 23:49:41.453017 systemd[1]: Created slice kubepods-besteffort-pod8633afc7_7cbd_43c9_a21b_20406480e7ab.slice - libcontainer container kubepods-besteffort-pod8633afc7_7cbd_43c9_a21b_20406480e7ab.slice. Sep 4 23:49:41.464169 kubelet[2748]: E0904 23:49:41.461609 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:41.466417 containerd[1493]: time="2025-09-04T23:49:41.466347531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cffdd7445-wbxs2,Uid:c0260609-a8c9-4e23-a707-2faee6934c9d,Namespace:calico-system,Attempt:0,}" Sep 4 23:49:41.472862 kubelet[2748]: I0904 23:49:41.472793 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8633afc7-7cbd-43c9-a21b-20406480e7ab-tigera-ca-bundle\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.472862 kubelet[2748]: I0904 23:49:41.472867 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8633afc7-7cbd-43c9-a21b-20406480e7ab-policysync\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.473061 kubelet[2748]: I0904 23:49:41.472901 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxhm\" (UniqueName: \"kubernetes.io/projected/8633afc7-7cbd-43c9-a21b-20406480e7ab-kube-api-access-vmxhm\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.473061 kubelet[2748]: I0904 
23:49:41.472930 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8633afc7-7cbd-43c9-a21b-20406480e7ab-cni-log-dir\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.473061 kubelet[2748]: I0904 23:49:41.472953 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8633afc7-7cbd-43c9-a21b-20406480e7ab-cni-net-dir\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.473061 kubelet[2748]: I0904 23:49:41.472976 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8633afc7-7cbd-43c9-a21b-20406480e7ab-var-run-calico\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.473061 kubelet[2748]: I0904 23:49:41.472995 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8633afc7-7cbd-43c9-a21b-20406480e7ab-cni-bin-dir\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.473230 kubelet[2748]: I0904 23:49:41.473015 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8633afc7-7cbd-43c9-a21b-20406480e7ab-lib-modules\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.473230 kubelet[2748]: I0904 23:49:41.473036 2748 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8633afc7-7cbd-43c9-a21b-20406480e7ab-node-certs\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.473230 kubelet[2748]: I0904 23:49:41.473056 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8633afc7-7cbd-43c9-a21b-20406480e7ab-xtables-lock\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.473301 kubelet[2748]: I0904 23:49:41.473243 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8633afc7-7cbd-43c9-a21b-20406480e7ab-var-lib-calico\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.474456 kubelet[2748]: I0904 23:49:41.473537 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8633afc7-7cbd-43c9-a21b-20406480e7ab-flexvol-driver-host\") pod \"calico-node-wnjtd\" (UID: \"8633afc7-7cbd-43c9-a21b-20406480e7ab\") " pod="calico-system/calico-node-wnjtd" Sep 4 23:49:41.492190 kubelet[2748]: E0904 23:49:41.492078 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e" Sep 4 23:49:41.519261 containerd[1493]: time="2025-09-04T23:49:41.518476144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:41.519261 containerd[1493]: time="2025-09-04T23:49:41.519175803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:41.519261 containerd[1493]: time="2025-09-04T23:49:41.519196412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:41.519650 containerd[1493]: time="2025-09-04T23:49:41.519599301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:41.555632 systemd[1]: Started cri-containerd-7d8cfaa90904c4d1d1744bf862243d0e1aa09d5a712a1a6a33dc0c4acb0da205.scope - libcontainer container 7d8cfaa90904c4d1d1744bf862243d0e1aa09d5a712a1a6a33dc0c4acb0da205. Sep 4 23:49:41.574608 kubelet[2748]: I0904 23:49:41.574536 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a98fe27d-f9b2-46e3-a9a5-1f9fa577590e-registration-dir\") pod \"csi-node-driver-28pvw\" (UID: \"a98fe27d-f9b2-46e3-a9a5-1f9fa577590e\") " pod="calico-system/csi-node-driver-28pvw" Sep 4 23:49:41.575077 kubelet[2748]: I0904 23:49:41.574633 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a98fe27d-f9b2-46e3-a9a5-1f9fa577590e-socket-dir\") pod \"csi-node-driver-28pvw\" (UID: \"a98fe27d-f9b2-46e3-a9a5-1f9fa577590e\") " pod="calico-system/csi-node-driver-28pvw" Sep 4 23:49:41.575077 kubelet[2748]: I0904 23:49:41.574664 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rr4\" (UniqueName: \"kubernetes.io/projected/a98fe27d-f9b2-46e3-a9a5-1f9fa577590e-kube-api-access-72rr4\") pod \"csi-node-driver-28pvw\" 
(UID: \"a98fe27d-f9b2-46e3-a9a5-1f9fa577590e\") " pod="calico-system/csi-node-driver-28pvw" Sep 4 23:49:41.575077 kubelet[2748]: I0904 23:49:41.574763 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a98fe27d-f9b2-46e3-a9a5-1f9fa577590e-kubelet-dir\") pod \"csi-node-driver-28pvw\" (UID: \"a98fe27d-f9b2-46e3-a9a5-1f9fa577590e\") " pod="calico-system/csi-node-driver-28pvw" Sep 4 23:49:41.575077 kubelet[2748]: I0904 23:49:41.574789 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a98fe27d-f9b2-46e3-a9a5-1f9fa577590e-varrun\") pod \"csi-node-driver-28pvw\" (UID: \"a98fe27d-f9b2-46e3-a9a5-1f9fa577590e\") " pod="calico-system/csi-node-driver-28pvw" Sep 4 23:49:41.580176 kubelet[2748]: E0904 23:49:41.579567 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.580176 kubelet[2748]: W0904 23:49:41.579614 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.580176 kubelet[2748]: E0904 23:49:41.579645 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.587399 kubelet[2748]: E0904 23:49:41.587294 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.587399 kubelet[2748]: W0904 23:49:41.587378 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.587399 kubelet[2748]: E0904 23:49:41.587409 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.587907 kubelet[2748]: E0904 23:49:41.587854 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.587907 kubelet[2748]: W0904 23:49:41.587872 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.587907 kubelet[2748]: E0904 23:49:41.587885 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.629760 containerd[1493]: time="2025-09-04T23:49:41.629707226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cffdd7445-wbxs2,Uid:c0260609-a8c9-4e23-a707-2faee6934c9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d8cfaa90904c4d1d1744bf862243d0e1aa09d5a712a1a6a33dc0c4acb0da205\"" Sep 4 23:49:41.633290 kubelet[2748]: E0904 23:49:41.633236 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:41.634515 containerd[1493]: time="2025-09-04T23:49:41.634462649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 4 23:49:41.675802 kubelet[2748]: E0904 23:49:41.675747 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.675802 kubelet[2748]: W0904 23:49:41.675782 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.675802 kubelet[2748]: E0904 23:49:41.675811 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.677408 kubelet[2748]: E0904 23:49:41.677381 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.677408 kubelet[2748]: W0904 23:49:41.677401 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.677484 kubelet[2748]: E0904 23:49:41.677416 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.679015 kubelet[2748]: E0904 23:49:41.678777 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.679015 kubelet[2748]: W0904 23:49:41.678795 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.679015 kubelet[2748]: E0904 23:49:41.678810 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.679484 kubelet[2748]: E0904 23:49:41.679447 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.679484 kubelet[2748]: W0904 23:49:41.679469 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.679484 kubelet[2748]: E0904 23:49:41.679482 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.681078 kubelet[2748]: E0904 23:49:41.679844 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.681078 kubelet[2748]: W0904 23:49:41.679867 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.681078 kubelet[2748]: E0904 23:49:41.679880 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.681078 kubelet[2748]: E0904 23:49:41.680077 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.681078 kubelet[2748]: W0904 23:49:41.680085 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.681078 kubelet[2748]: E0904 23:49:41.680116 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.681078 kubelet[2748]: E0904 23:49:41.680376 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.681078 kubelet[2748]: W0904 23:49:41.680386 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.681078 kubelet[2748]: E0904 23:49:41.680396 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.681078 kubelet[2748]: E0904 23:49:41.680628 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.681371 kubelet[2748]: W0904 23:49:41.680638 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.681371 kubelet[2748]: E0904 23:49:41.680650 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.681371 kubelet[2748]: E0904 23:49:41.681027 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.681371 kubelet[2748]: W0904 23:49:41.681038 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.681371 kubelet[2748]: E0904 23:49:41.681048 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.681371 kubelet[2748]: E0904 23:49:41.681278 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.681371 kubelet[2748]: W0904 23:49:41.681290 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.681371 kubelet[2748]: E0904 23:49:41.681300 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.681755 kubelet[2748]: E0904 23:49:41.681524 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.681755 kubelet[2748]: W0904 23:49:41.681532 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.681755 kubelet[2748]: E0904 23:49:41.681542 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.681826 kubelet[2748]: E0904 23:49:41.681771 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.681826 kubelet[2748]: W0904 23:49:41.681779 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.681826 kubelet[2748]: E0904 23:49:41.681787 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.682386 kubelet[2748]: E0904 23:49:41.682036 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.682386 kubelet[2748]: W0904 23:49:41.682048 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.682386 kubelet[2748]: E0904 23:49:41.682057 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.682386 kubelet[2748]: E0904 23:49:41.682303 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.682386 kubelet[2748]: W0904 23:49:41.682313 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.682386 kubelet[2748]: E0904 23:49:41.682322 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.682592 kubelet[2748]: E0904 23:49:41.682582 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.682592 kubelet[2748]: W0904 23:49:41.682592 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.682685 kubelet[2748]: E0904 23:49:41.682601 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.683212 kubelet[2748]: E0904 23:49:41.682997 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.683212 kubelet[2748]: W0904 23:49:41.683015 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.683212 kubelet[2748]: E0904 23:49:41.683027 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.683394 kubelet[2748]: E0904 23:49:41.683371 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.683394 kubelet[2748]: W0904 23:49:41.683388 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.683464 kubelet[2748]: E0904 23:49:41.683402 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.683748 kubelet[2748]: E0904 23:49:41.683730 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.683748 kubelet[2748]: W0904 23:49:41.683743 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.683849 kubelet[2748]: E0904 23:49:41.683753 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.685790 kubelet[2748]: E0904 23:49:41.684036 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.685790 kubelet[2748]: W0904 23:49:41.684050 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.685790 kubelet[2748]: E0904 23:49:41.684061 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.685790 kubelet[2748]: E0904 23:49:41.684358 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.685790 kubelet[2748]: W0904 23:49:41.684368 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.685790 kubelet[2748]: E0904 23:49:41.684378 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.685790 kubelet[2748]: E0904 23:49:41.684593 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.685790 kubelet[2748]: W0904 23:49:41.684601 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.685790 kubelet[2748]: E0904 23:49:41.684609 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.685790 kubelet[2748]: E0904 23:49:41.684838 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.686063 kubelet[2748]: W0904 23:49:41.684847 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.686063 kubelet[2748]: E0904 23:49:41.684859 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.686063 kubelet[2748]: E0904 23:49:41.685242 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.686063 kubelet[2748]: W0904 23:49:41.685257 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.686063 kubelet[2748]: E0904 23:49:41.685271 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.686063 kubelet[2748]: E0904 23:49:41.685511 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.686063 kubelet[2748]: W0904 23:49:41.685520 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.686063 kubelet[2748]: E0904 23:49:41.685530 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.686063 kubelet[2748]: E0904 23:49:41.685771 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.686063 kubelet[2748]: W0904 23:49:41.685780 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.686322 kubelet[2748]: E0904 23:49:41.685791 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:41.697149 kubelet[2748]: E0904 23:49:41.696230 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:41.697149 kubelet[2748]: W0904 23:49:41.696263 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:41.697149 kubelet[2748]: E0904 23:49:41.696289 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:41.765527 containerd[1493]: time="2025-09-04T23:49:41.765371448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wnjtd,Uid:8633afc7-7cbd-43c9-a21b-20406480e7ab,Namespace:calico-system,Attempt:0,}" Sep 4 23:49:41.813646 containerd[1493]: time="2025-09-04T23:49:41.813090170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:41.813646 containerd[1493]: time="2025-09-04T23:49:41.813197012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:41.813646 containerd[1493]: time="2025-09-04T23:49:41.813211008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:41.813646 containerd[1493]: time="2025-09-04T23:49:41.813347706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:41.834354 systemd[1]: Started cri-containerd-d73c3ad910f60ce2bef3e3930982250d2e4d9b49dce9af6e0674e95007c0c1c5.scope - libcontainer container d73c3ad910f60ce2bef3e3930982250d2e4d9b49dce9af6e0674e95007c0c1c5. 
Sep 4 23:49:41.882035 containerd[1493]: time="2025-09-04T23:49:41.881978182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wnjtd,Uid:8633afc7-7cbd-43c9-a21b-20406480e7ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"d73c3ad910f60ce2bef3e3930982250d2e4d9b49dce9af6e0674e95007c0c1c5\"" Sep 4 23:49:43.153985 kubelet[2748]: E0904 23:49:43.153924 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e" Sep 4 23:49:43.346479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492974727.mount: Deactivated successfully. Sep 4 23:49:43.956615 containerd[1493]: time="2025-09-04T23:49:43.956539571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:43.957810 containerd[1493]: time="2025-09-04T23:49:43.957711060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 4 23:49:43.959117 containerd[1493]: time="2025-09-04T23:49:43.959070241Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:43.962490 containerd[1493]: time="2025-09-04T23:49:43.962445682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:43.963310 containerd[1493]: time="2025-09-04T23:49:43.963235491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.328733929s" Sep 4 23:49:43.963310 containerd[1493]: time="2025-09-04T23:49:43.963294822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 4 23:49:43.964539 containerd[1493]: time="2025-09-04T23:49:43.964445512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 4 23:49:43.979225 containerd[1493]: time="2025-09-04T23:49:43.979168919Z" level=info msg="CreateContainer within sandbox \"7d8cfaa90904c4d1d1744bf862243d0e1aa09d5a712a1a6a33dc0c4acb0da205\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 23:49:44.004616 containerd[1493]: time="2025-09-04T23:49:44.004530686Z" level=info msg="CreateContainer within sandbox \"7d8cfaa90904c4d1d1744bf862243d0e1aa09d5a712a1a6a33dc0c4acb0da205\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8a59148877a39884ba9667dd1c0e8b9a91b723735720e97fc6ea909492984abe\"" Sep 4 23:49:44.005533 containerd[1493]: time="2025-09-04T23:49:44.005470066Z" level=info msg="StartContainer for \"8a59148877a39884ba9667dd1c0e8b9a91b723735720e97fc6ea909492984abe\"" Sep 4 23:49:44.042361 systemd[1]: Started cri-containerd-8a59148877a39884ba9667dd1c0e8b9a91b723735720e97fc6ea909492984abe.scope - libcontainer container 8a59148877a39884ba9667dd1c0e8b9a91b723735720e97fc6ea909492984abe. 
Sep 4 23:49:44.100826 containerd[1493]: time="2025-09-04T23:49:44.100761124Z" level=info msg="StartContainer for \"8a59148877a39884ba9667dd1c0e8b9a91b723735720e97fc6ea909492984abe\" returns successfully" Sep 4 23:49:44.242902 kubelet[2748]: E0904 23:49:44.242300 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:44.258400 kubelet[2748]: I0904 23:49:44.256651 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cffdd7445-wbxs2" podStartSLOduration=1.926449737 podStartE2EDuration="4.256618009s" podCreationTimestamp="2025-09-04 23:49:40 +0000 UTC" firstStartedPulling="2025-09-04 23:49:41.63405489 +0000 UTC m=+36.579138794" lastFinishedPulling="2025-09-04 23:49:43.964223162 +0000 UTC m=+38.909307066" observedRunningTime="2025-09-04 23:49:44.255829443 +0000 UTC m=+39.200913337" watchObservedRunningTime="2025-09-04 23:49:44.256618009 +0000 UTC m=+39.201701913" Sep 4 23:49:44.285964 kubelet[2748]: E0904 23:49:44.285921 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.286355 kubelet[2748]: W0904 23:49:44.286188 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.286355 kubelet[2748]: E0904 23:49:44.286226 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.286829 kubelet[2748]: E0904 23:49:44.286563 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.286980 kubelet[2748]: W0904 23:49:44.286903 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.286980 kubelet[2748]: E0904 23:49:44.286924 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.287417 kubelet[2748]: E0904 23:49:44.287329 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.287417 kubelet[2748]: W0904 23:49:44.287346 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.287417 kubelet[2748]: E0904 23:49:44.287358 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.288154 kubelet[2748]: E0904 23:49:44.288035 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.288154 kubelet[2748]: W0904 23:49:44.288050 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.288154 kubelet[2748]: E0904 23:49:44.288063 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.288903 kubelet[2748]: E0904 23:49:44.288886 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.289034 kubelet[2748]: W0904 23:49:44.288966 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.289034 kubelet[2748]: E0904 23:49:44.288984 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.289491 kubelet[2748]: E0904 23:49:44.289411 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.289491 kubelet[2748]: W0904 23:49:44.289425 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.289491 kubelet[2748]: E0904 23:49:44.289438 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.290405 kubelet[2748]: E0904 23:49:44.290323 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.290405 kubelet[2748]: W0904 23:49:44.290337 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.290405 kubelet[2748]: E0904 23:49:44.290352 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.291570 kubelet[2748]: E0904 23:49:44.291466 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.291570 kubelet[2748]: W0904 23:49:44.291481 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.291570 kubelet[2748]: E0904 23:49:44.291494 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.292510 kubelet[2748]: E0904 23:49:44.291930 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.292510 kubelet[2748]: W0904 23:49:44.291944 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.292510 kubelet[2748]: E0904 23:49:44.291955 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.292997 kubelet[2748]: E0904 23:49:44.292916 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.292997 kubelet[2748]: W0904 23:49:44.292931 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.292997 kubelet[2748]: E0904 23:49:44.292944 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.295673 kubelet[2748]: E0904 23:49:44.295486 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.295673 kubelet[2748]: W0904 23:49:44.295521 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.295673 kubelet[2748]: E0904 23:49:44.295542 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.296672 kubelet[2748]: E0904 23:49:44.296499 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.296672 kubelet[2748]: W0904 23:49:44.296572 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.296672 kubelet[2748]: E0904 23:49:44.296591 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.298587 kubelet[2748]: E0904 23:49:44.298334 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.298587 kubelet[2748]: W0904 23:49:44.298350 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.298587 kubelet[2748]: E0904 23:49:44.298365 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.299168 kubelet[2748]: E0904 23:49:44.299032 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.299168 kubelet[2748]: W0904 23:49:44.299046 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.299168 kubelet[2748]: E0904 23:49:44.299091 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.299835 kubelet[2748]: E0904 23:49:44.299743 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.299835 kubelet[2748]: W0904 23:49:44.299817 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.299835 kubelet[2748]: E0904 23:49:44.299828 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.300336 kubelet[2748]: E0904 23:49:44.300317 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.300336 kubelet[2748]: W0904 23:49:44.300332 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.300473 kubelet[2748]: E0904 23:49:44.300344 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.300645 kubelet[2748]: E0904 23:49:44.300618 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.300645 kubelet[2748]: W0904 23:49:44.300642 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.300819 kubelet[2748]: E0904 23:49:44.300652 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.300945 kubelet[2748]: E0904 23:49:44.300931 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.300945 kubelet[2748]: W0904 23:49:44.300942 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.300945 kubelet[2748]: E0904 23:49:44.300954 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.302463 kubelet[2748]: E0904 23:49:44.301830 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.302463 kubelet[2748]: W0904 23:49:44.301844 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.302463 kubelet[2748]: E0904 23:49:44.301859 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.302463 kubelet[2748]: E0904 23:49:44.302222 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.302463 kubelet[2748]: W0904 23:49:44.302233 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.302463 kubelet[2748]: E0904 23:49:44.302245 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.302668 kubelet[2748]: E0904 23:49:44.302572 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.302668 kubelet[2748]: W0904 23:49:44.302583 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.302737 kubelet[2748]: E0904 23:49:44.302712 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.303065 kubelet[2748]: E0904 23:49:44.303048 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.303065 kubelet[2748]: W0904 23:49:44.303063 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.303229 kubelet[2748]: E0904 23:49:44.303075 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.303562 kubelet[2748]: E0904 23:49:44.303544 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.303562 kubelet[2748]: W0904 23:49:44.303560 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.303675 kubelet[2748]: E0904 23:49:44.303573 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.303968 kubelet[2748]: E0904 23:49:44.303933 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.303968 kubelet[2748]: W0904 23:49:44.303948 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.303968 kubelet[2748]: E0904 23:49:44.303960 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.305252 kubelet[2748]: E0904 23:49:44.305204 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.305252 kubelet[2748]: W0904 23:49:44.305223 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.305252 kubelet[2748]: E0904 23:49:44.305236 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:44.305511 kubelet[2748]: E0904 23:49:44.305480 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.305511 kubelet[2748]: W0904 23:49:44.305495 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.305511 kubelet[2748]: E0904 23:49:44.305507 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:44.305788 kubelet[2748]: E0904 23:49:44.305767 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:44.305788 kubelet[2748]: W0904 23:49:44.305783 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:44.305937 kubelet[2748]: E0904 23:49:44.305795 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:45.151036 kubelet[2748]: E0904 23:49:45.150924 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e" Sep 4 23:49:45.244447 kubelet[2748]: E0904 23:49:45.244368 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:45.306565 kubelet[2748]: E0904 23:49:45.306512 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:45.306565 kubelet[2748]: W0904 23:49:45.306546 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:45.306565 kubelet[2748]: E0904 23:49:45.306577 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:45.306923 kubelet[2748]: E0904 23:49:45.306903 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:45.306923 kubelet[2748]: W0904 23:49:45.306916 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:45.306968 kubelet[2748]: E0904 23:49:45.306930 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:45.318569 kubelet[2748]: E0904 23:49:45.318548 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:45.318569 kubelet[2748]: W0904 23:49:45.318563 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:45.318679 kubelet[2748]: E0904 23:49:45.318574 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:45.830835 containerd[1493]: time="2025-09-04T23:49:45.830751202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:45.850522 containerd[1493]: time="2025-09-04T23:49:45.850312424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 4 23:49:45.921192 containerd[1493]: time="2025-09-04T23:49:45.921088326Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:46.001068 containerd[1493]: time="2025-09-04T23:49:46.000983331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:46.002329 containerd[1493]: time="2025-09-04T23:49:46.002266399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.037780732s" Sep 4 23:49:46.002329 containerd[1493]: time="2025-09-04T23:49:46.002325631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 23:49:46.175895 containerd[1493]: time="2025-09-04T23:49:46.175684907Z" level=info msg="CreateContainer within sandbox \"d73c3ad910f60ce2bef3e3930982250d2e4d9b49dce9af6e0674e95007c0c1c5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 23:49:46.246778 kubelet[2748]: E0904 23:49:46.246711 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:46.317926 kubelet[2748]: E0904 23:49:46.317877 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:46.317926 kubelet[2748]: W0904 23:49:46.317909 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:46.317926 kubelet[2748]: E0904 23:49:46.317941 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:46.318299 kubelet[2748]: E0904 23:49:46.318277 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:46.318299 kubelet[2748]: W0904 23:49:46.318290 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:46.318299 kubelet[2748]: E0904 23:49:46.318300 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:46.318612 kubelet[2748]: E0904 23:49:46.318576 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:46.318612 kubelet[2748]: W0904 23:49:46.318589 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:46.318702 kubelet[2748]: E0904 23:49:46.318622 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:49:46.319002 kubelet[2748]: E0904 23:49:46.318968 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:46.319002 kubelet[2748]: W0904 23:49:46.318985 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:46.319002 kubelet[2748]: E0904 23:49:46.318996 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:49:46.319295 kubelet[2748]: E0904 23:49:46.319278 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:49:46.319295 kubelet[2748]: W0904 23:49:46.319289 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:49:46.319440 kubelet[2748]: E0904 23:49:46.319299 2748 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 23:49:47.155699 kubelet[2748]: E0904 23:49:47.155534 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e"
Sep 4 23:49:47.915476 containerd[1493]: time="2025-09-04T23:49:47.915399741Z" level=info msg="CreateContainer within sandbox \"d73c3ad910f60ce2bef3e3930982250d2e4d9b49dce9af6e0674e95007c0c1c5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cda18bd358d9172805d75f4c48425c3c2f4b79033ba59c967297c6001a6ea96a\""
Sep 4 23:49:47.916172 containerd[1493]: time="2025-09-04T23:49:47.916068021Z" level=info msg="StartContainer for \"cda18bd358d9172805d75f4c48425c3c2f4b79033ba59c967297c6001a6ea96a\""
Sep 4 23:49:47.950323 systemd[1]: Started cri-containerd-cda18bd358d9172805d75f4c48425c3c2f4b79033ba59c967297c6001a6ea96a.scope - libcontainer container cda18bd358d9172805d75f4c48425c3c2f4b79033ba59c967297c6001a6ea96a.
Sep 4 23:49:47.998660 systemd[1]: cri-containerd-cda18bd358d9172805d75f4c48425c3c2f4b79033ba59c967297c6001a6ea96a.scope: Deactivated successfully.
Sep 4 23:49:48.156080 containerd[1493]: time="2025-09-04T23:49:48.155982965Z" level=info msg="StartContainer for \"cda18bd358d9172805d75f4c48425c3c2f4b79033ba59c967297c6001a6ea96a\" returns successfully"
Sep 4 23:49:48.182596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cda18bd358d9172805d75f4c48425c3c2f4b79033ba59c967297c6001a6ea96a-rootfs.mount: Deactivated successfully. 
Sep 4 23:49:49.151172 kubelet[2748]: E0904 23:49:49.151075 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e"
Sep 4 23:49:49.435064 containerd[1493]: time="2025-09-04T23:49:49.434871039Z" level=info msg="shim disconnected" id=cda18bd358d9172805d75f4c48425c3c2f4b79033ba59c967297c6001a6ea96a namespace=k8s.io
Sep 4 23:49:49.435064 containerd[1493]: time="2025-09-04T23:49:49.434954637Z" level=warning msg="cleaning up after shim disconnected" id=cda18bd358d9172805d75f4c48425c3c2f4b79033ba59c967297c6001a6ea96a namespace=k8s.io
Sep 4 23:49:49.435064 containerd[1493]: time="2025-09-04T23:49:49.434969936Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:49:50.261450 containerd[1493]: time="2025-09-04T23:49:50.261107655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 4 23:49:51.151063 kubelet[2748]: E0904 23:49:51.150946 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e"
Sep 4 23:49:53.152225 kubelet[2748]: E0904 23:49:53.152156 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e"
Sep 4 23:49:55.152038 kubelet[2748]: E0904 23:49:55.151934 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e"
Sep 4 23:49:56.868698 containerd[1493]: time="2025-09-04T23:49:56.868582435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:49:56.871817 containerd[1493]: time="2025-09-04T23:49:56.871758458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 4 23:49:56.877166 containerd[1493]: time="2025-09-04T23:49:56.876929006Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:49:56.884614 containerd[1493]: time="2025-09-04T23:49:56.884050901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:49:56.885444 containerd[1493]: time="2025-09-04T23:49:56.885176551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 6.623998906s"
Sep 4 23:49:56.885444 containerd[1493]: time="2025-09-04T23:49:56.885218510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 4 23:49:56.898158 containerd[1493]: time="2025-09-04T23:49:56.898074525Z" level=info msg="CreateContainer within sandbox \"d73c3ad910f60ce2bef3e3930982250d2e4d9b49dce9af6e0674e95007c0c1c5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 23:49:56.941527 containerd[1493]: time="2025-09-04T23:49:56.941232425Z" level=info msg="CreateContainer within sandbox \"d73c3ad910f60ce2bef3e3930982250d2e4d9b49dce9af6e0674e95007c0c1c5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5\""
Sep 4 23:49:56.942381 containerd[1493]: time="2025-09-04T23:49:56.942291630Z" level=info msg="StartContainer for \"ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5\""
Sep 4 23:49:56.982896 systemd[1]: run-containerd-runc-k8s.io-ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5-runc.CPGOu7.mount: Deactivated successfully.
Sep 4 23:49:57.002897 systemd[1]: Started cri-containerd-ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5.scope - libcontainer container ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5. 
Sep 4 23:49:57.059782 containerd[1493]: time="2025-09-04T23:49:57.059711693Z" level=info msg="StartContainer for \"ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5\" returns successfully"
Sep 4 23:49:57.154338 kubelet[2748]: E0904 23:49:57.151400 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e"
Sep 4 23:49:59.151738 kubelet[2748]: E0904 23:49:59.151658 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e"
Sep 4 23:50:01.151208 kubelet[2748]: E0904 23:50:01.151118 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e"
Sep 4 23:50:02.944213 systemd[1]: cri-containerd-ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5.scope: Deactivated successfully.
Sep 4 23:50:02.944657 systemd[1]: cri-containerd-ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5.scope: Consumed 684ms CPU time, 176.1M memory peak, 2.8M read from disk, 171.3M written to disk.
Sep 4 23:50:02.973895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5-rootfs.mount: Deactivated successfully. 
Sep 4 23:50:02.998164 kubelet[2748]: I0904 23:50:02.997759 2748 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 23:50:03.158426 systemd[1]: Created slice kubepods-besteffort-poda98fe27d_f9b2_46e3_a9a5_1f9fa577590e.slice - libcontainer container kubepods-besteffort-poda98fe27d_f9b2_46e3_a9a5_1f9fa577590e.slice.
Sep 4 23:50:03.201174 containerd[1493]: time="2025-09-04T23:50:03.200961441Z" level=info msg="shim disconnected" id=ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5 namespace=k8s.io
Sep 4 23:50:03.201174 containerd[1493]: time="2025-09-04T23:50:03.201033447Z" level=warning msg="cleaning up after shim disconnected" id=ba3b9c41aadd06c70f1522abc258a32f015b0768b7eafdae7e3800bf73c1c6b5 namespace=k8s.io
Sep 4 23:50:03.201174 containerd[1493]: time="2025-09-04T23:50:03.201043976Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:50:03.212556 containerd[1493]: time="2025-09-04T23:50:03.212483789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:0,}"
Sep 4 23:50:03.653022 systemd[1]: Created slice kubepods-burstable-podcfe8ce7b_a276_49b2_b396_cff3cb8bdc33.slice - libcontainer container kubepods-burstable-podcfe8ce7b_a276_49b2_b396_cff3cb8bdc33.slice.
Sep 4 23:50:03.736946 systemd[1]: Created slice kubepods-besteffort-pod53444f07_cee6_411d_9fba_8985320ac23a.slice - libcontainer container kubepods-besteffort-pod53444f07_cee6_411d_9fba_8985320ac23a.slice.
Sep 4 23:50:03.762077 systemd[1]: Created slice kubepods-burstable-pod72827434_065b_42a4_a2fc_ad0f0cccd362.slice - libcontainer container kubepods-burstable-pod72827434_065b_42a4_a2fc_ad0f0cccd362.slice.
Sep 4 23:50:03.770888 systemd[1]: Created slice kubepods-besteffort-podc4b9b516_8bab_431f_a718_d4b546f75053.slice - libcontainer container kubepods-besteffort-podc4b9b516_8bab_431f_a718_d4b546f75053.slice. 
Sep 4 23:50:03.817020 systemd[1]: Created slice kubepods-besteffort-podd64b33cd_4dd7_45f1_991e_5a59b10c693d.slice - libcontainer container kubepods-besteffort-podd64b33cd_4dd7_45f1_991e_5a59b10c693d.slice.
Sep 4 23:50:03.822762 containerd[1493]: time="2025-09-04T23:50:03.821955223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 4 23:50:03.830776 kubelet[2748]: I0904 23:50:03.829960 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcg2b\" (UniqueName: \"kubernetes.io/projected/d64b33cd-4dd7-45f1-991e-5a59b10c693d-kube-api-access-kcg2b\") pod \"calico-apiserver-78c45fc5f4-77ks8\" (UID: \"d64b33cd-4dd7-45f1-991e-5a59b10c693d\") " pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8"
Sep 4 23:50:03.830776 kubelet[2748]: I0904 23:50:03.830024 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d773f2-41cf-4b0b-ab63-23228f5501e2-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-d8krf\" (UID: \"75d773f2-41cf-4b0b-ab63-23228f5501e2\") " pod="calico-system/goldmane-54d579b49d-d8krf"
Sep 4 23:50:03.830776 kubelet[2748]: I0904 23:50:03.830045 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/24110c1b-5cfd-4ff0-b414-4604e0a02acc-calico-apiserver-certs\") pod \"calico-apiserver-78c45fc5f4-726pc\" (UID: \"24110c1b-5cfd-4ff0-b414-4604e0a02acc\") " pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc"
Sep 4 23:50:03.830776 kubelet[2748]: I0904 23:50:03.830081 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p54v\" (UniqueName: \"kubernetes.io/projected/53444f07-cee6-411d-9fba-8985320ac23a-kube-api-access-9p54v\") pod \"calico-kube-controllers-9fcb7bdb4-fjbtd\" (UID: \"53444f07-cee6-411d-9fba-8985320ac23a\") " pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd"
Sep 4 23:50:03.830776 kubelet[2748]: I0904 23:50:03.830106 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d64b33cd-4dd7-45f1-991e-5a59b10c693d-calico-apiserver-certs\") pod \"calico-apiserver-78c45fc5f4-77ks8\" (UID: \"d64b33cd-4dd7-45f1-991e-5a59b10c693d\") " pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8"
Sep 4 23:50:03.831086 kubelet[2748]: I0904 23:50:03.830146 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d773f2-41cf-4b0b-ab63-23228f5501e2-config\") pod \"goldmane-54d579b49d-d8krf\" (UID: \"75d773f2-41cf-4b0b-ab63-23228f5501e2\") " pod="calico-system/goldmane-54d579b49d-d8krf"
Sep 4 23:50:03.831086 kubelet[2748]: I0904 23:50:03.830167 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpvfb\" (UniqueName: \"kubernetes.io/projected/72827434-065b-42a4-a2fc-ad0f0cccd362-kube-api-access-fpvfb\") pod \"coredns-674b8bbfcf-q7s5n\" (UID: \"72827434-065b-42a4-a2fc-ad0f0cccd362\") " pod="kube-system/coredns-674b8bbfcf-q7s5n"
Sep 4 23:50:03.831086 kubelet[2748]: I0904 23:50:03.830189 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2cr\" (UniqueName: \"kubernetes.io/projected/75d773f2-41cf-4b0b-ab63-23228f5501e2-kube-api-access-5t2cr\") pod \"goldmane-54d579b49d-d8krf\" (UID: \"75d773f2-41cf-4b0b-ab63-23228f5501e2\") " pod="calico-system/goldmane-54d579b49d-d8krf"
Sep 4 23:50:03.831086 kubelet[2748]: I0904 23:50:03.830207 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b9b516-8bab-431f-a718-d4b546f75053-whisker-ca-bundle\") pod \"whisker-754df77889-htb2w\" (UID: \"c4b9b516-8bab-431f-a718-d4b546f75053\") " pod="calico-system/whisker-754df77889-htb2w"
Sep 4 23:50:03.831086 kubelet[2748]: I0904 23:50:03.830239 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4b9b516-8bab-431f-a718-d4b546f75053-whisker-backend-key-pair\") pod \"whisker-754df77889-htb2w\" (UID: \"c4b9b516-8bab-431f-a718-d4b546f75053\") " pod="calico-system/whisker-754df77889-htb2w"
Sep 4 23:50:03.834022 kubelet[2748]: I0904 23:50:03.830273 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msgmw\" (UniqueName: \"kubernetes.io/projected/cfe8ce7b-a276-49b2-b396-cff3cb8bdc33-kube-api-access-msgmw\") pod \"coredns-674b8bbfcf-z7mb6\" (UID: \"cfe8ce7b-a276-49b2-b396-cff3cb8bdc33\") " pod="kube-system/coredns-674b8bbfcf-z7mb6"
Sep 4 23:50:03.834022 kubelet[2748]: I0904 23:50:03.830294 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/75d773f2-41cf-4b0b-ab63-23228f5501e2-goldmane-key-pair\") pod \"goldmane-54d579b49d-d8krf\" (UID: \"75d773f2-41cf-4b0b-ab63-23228f5501e2\") " pod="calico-system/goldmane-54d579b49d-d8krf"
Sep 4 23:50:03.834022 kubelet[2748]: I0904 23:50:03.830323 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5s2k\" (UniqueName: \"kubernetes.io/projected/c4b9b516-8bab-431f-a718-d4b546f75053-kube-api-access-b5s2k\") pod \"whisker-754df77889-htb2w\" (UID: \"c4b9b516-8bab-431f-a718-d4b546f75053\") " pod="calico-system/whisker-754df77889-htb2w"
Sep 4 23:50:03.834022 kubelet[2748]: I0904 23:50:03.830353 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72827434-065b-42a4-a2fc-ad0f0cccd362-config-volume\") pod \"coredns-674b8bbfcf-q7s5n\" (UID: \"72827434-065b-42a4-a2fc-ad0f0cccd362\") " pod="kube-system/coredns-674b8bbfcf-q7s5n" Sep 4 23:50:03.834022 kubelet[2748]: I0904 23:50:03.830388 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpxpw\" (UniqueName: \"kubernetes.io/projected/24110c1b-5cfd-4ff0-b414-4604e0a02acc-kube-api-access-tpxpw\") pod \"calico-apiserver-78c45fc5f4-726pc\" (UID: \"24110c1b-5cfd-4ff0-b414-4604e0a02acc\") " pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" Sep 4 23:50:03.834244 kubelet[2748]: I0904 23:50:03.830419 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53444f07-cee6-411d-9fba-8985320ac23a-tigera-ca-bundle\") pod \"calico-kube-controllers-9fcb7bdb4-fjbtd\" (UID: \"53444f07-cee6-411d-9fba-8985320ac23a\") " pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" Sep 4 23:50:03.834244 kubelet[2748]: I0904 23:50:03.830445 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfe8ce7b-a276-49b2-b396-cff3cb8bdc33-config-volume\") pod \"coredns-674b8bbfcf-z7mb6\" (UID: \"cfe8ce7b-a276-49b2-b396-cff3cb8bdc33\") " pod="kube-system/coredns-674b8bbfcf-z7mb6" Sep 4 23:50:03.837418 systemd[1]: Created slice kubepods-besteffort-pod24110c1b_5cfd_4ff0_b414_4604e0a02acc.slice - libcontainer container kubepods-besteffort-pod24110c1b_5cfd_4ff0_b414_4604e0a02acc.slice. Sep 4 23:50:03.848609 systemd[1]: Created slice kubepods-besteffort-pod75d773f2_41cf_4b0b_ab63_23228f5501e2.slice - libcontainer container kubepods-besteffort-pod75d773f2_41cf_4b0b_ab63_23228f5501e2.slice. 
Sep 4 23:50:04.014517 containerd[1493]: time="2025-09-04T23:50:04.013893669Z" level=error msg="Failed to destroy network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:04.017509 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29-shm.mount: Deactivated successfully. Sep 4 23:50:04.021093 containerd[1493]: time="2025-09-04T23:50:04.020917464Z" level=error msg="encountered an error cleaning up failed sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:04.021093 containerd[1493]: time="2025-09-04T23:50:04.021059381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:04.021542 kubelet[2748]: E0904 23:50:04.021448 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 4 23:50:04.021992 kubelet[2748]: E0904 23:50:04.021833 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-28pvw" Sep 4 23:50:04.021992 kubelet[2748]: E0904 23:50:04.021871 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-28pvw" Sep 4 23:50:04.030182 kubelet[2748]: E0904 23:50:04.029925 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-28pvw_calico-system(a98fe27d-f9b2-46e3-a9a5-1f9fa577590e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-28pvw_calico-system(a98fe27d-f9b2-46e3-a9a5-1f9fa577590e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e" Sep 4 23:50:04.049640 containerd[1493]: time="2025-09-04T23:50:04.049559490Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-9fcb7bdb4-fjbtd,Uid:53444f07-cee6-411d-9fba-8985320ac23a,Namespace:calico-system,Attempt:0,}" Sep 4 23:50:04.067104 kubelet[2748]: E0904 23:50:04.066988 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:04.068597 containerd[1493]: time="2025-09-04T23:50:04.068533830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7s5n,Uid:72827434-065b-42a4-a2fc-ad0f0cccd362,Namespace:kube-system,Attempt:0,}" Sep 4 23:50:04.075072 containerd[1493]: time="2025-09-04T23:50:04.075003513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754df77889-htb2w,Uid:c4b9b516-8bab-431f-a718-d4b546f75053,Namespace:calico-system,Attempt:0,}" Sep 4 23:50:04.125050 containerd[1493]: time="2025-09-04T23:50:04.125000111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-77ks8,Uid:d64b33cd-4dd7-45f1-991e-5a59b10c693d,Namespace:calico-apiserver,Attempt:0,}" Sep 4 23:50:04.145789 containerd[1493]: time="2025-09-04T23:50:04.145722502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-726pc,Uid:24110c1b-5cfd-4ff0-b414-4604e0a02acc,Namespace:calico-apiserver,Attempt:0,}" Sep 4 23:50:04.154271 containerd[1493]: time="2025-09-04T23:50:04.154212169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8krf,Uid:75d773f2-41cf-4b0b-ab63-23228f5501e2,Namespace:calico-system,Attempt:0,}" Sep 4 23:50:04.256156 kubelet[2748]: E0904 23:50:04.256084 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:04.256955 containerd[1493]: time="2025-09-04T23:50:04.256833744Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-z7mb6,Uid:cfe8ce7b-a276-49b2-b396-cff3cb8bdc33,Namespace:kube-system,Attempt:0,}" Sep 4 23:50:04.828802 kubelet[2748]: I0904 23:50:04.828766 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29" Sep 4 23:50:04.841217 containerd[1493]: time="2025-09-04T23:50:04.841154956Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\"" Sep 4 23:50:04.861436 containerd[1493]: time="2025-09-04T23:50:04.861346088Z" level=info msg="Ensure that sandbox b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29 in task-service has been cleanup successfully" Sep 4 23:50:04.861703 containerd[1493]: time="2025-09-04T23:50:04.861678484Z" level=info msg="TearDown network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" successfully" Sep 4 23:50:04.861703 containerd[1493]: time="2025-09-04T23:50:04.861696528Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" returns successfully" Sep 4 23:50:04.862638 containerd[1493]: time="2025-09-04T23:50:04.862609036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:1,}" Sep 4 23:50:04.864797 systemd[1]: run-netns-cni\x2dfd734c76\x2db72f\x2d66eb\x2d8990\x2d1a68b4228314.mount: Deactivated successfully. 
Sep 4 23:50:06.038077 containerd[1493]: time="2025-09-04T23:50:06.037990849Z" level=error msg="Failed to destroy network for sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.038700 containerd[1493]: time="2025-09-04T23:50:06.038557466Z" level=error msg="encountered an error cleaning up failed sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.038700 containerd[1493]: time="2025-09-04T23:50:06.038634861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7s5n,Uid:72827434-065b-42a4-a2fc-ad0f0cccd362,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.039371 kubelet[2748]: E0904 23:50:06.039317 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.039728 kubelet[2748]: E0904 23:50:06.039408 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q7s5n" Sep 4 23:50:06.039728 kubelet[2748]: E0904 23:50:06.039442 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q7s5n" Sep 4 23:50:06.039728 kubelet[2748]: E0904 23:50:06.039508 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q7s5n_kube-system(72827434-065b-42a4-a2fc-ad0f0cccd362)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q7s5n_kube-system(72827434-065b-42a4-a2fc-ad0f0cccd362)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q7s5n" podUID="72827434-065b-42a4-a2fc-ad0f0cccd362" Sep 4 23:50:06.086248 containerd[1493]: time="2025-09-04T23:50:06.086038121Z" level=error msg="Failed to destroy network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 
23:50:06.090903 containerd[1493]: time="2025-09-04T23:50:06.090828942Z" level=error msg="encountered an error cleaning up failed sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.091066 containerd[1493]: time="2025-09-04T23:50:06.090892361Z" level=error msg="Failed to destroy network for sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.091066 containerd[1493]: time="2025-09-04T23:50:06.090966682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-77ks8,Uid:d64b33cd-4dd7-45f1-991e-5a59b10c693d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.091499 containerd[1493]: time="2025-09-04T23:50:06.091362858Z" level=error msg="encountered an error cleaning up failed sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.091499 containerd[1493]: time="2025-09-04T23:50:06.091411298Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-9fcb7bdb4-fjbtd,Uid:53444f07-cee6-411d-9fba-8985320ac23a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.091635 kubelet[2748]: E0904 23:50:06.091399 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.091635 kubelet[2748]: E0904 23:50:06.091524 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" Sep 4 23:50:06.091635 kubelet[2748]: E0904 23:50:06.091579 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" Sep 4 23:50:06.095650 kubelet[2748]: E0904 23:50:06.091661 2748 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c45fc5f4-77ks8_calico-apiserver(d64b33cd-4dd7-45f1-991e-5a59b10c693d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78c45fc5f4-77ks8_calico-apiserver(d64b33cd-4dd7-45f1-991e-5a59b10c693d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" podUID="d64b33cd-4dd7-45f1-991e-5a59b10c693d" Sep 4 23:50:06.095650 kubelet[2748]: E0904 23:50:06.094319 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.095650 kubelet[2748]: E0904 23:50:06.094377 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" Sep 4 23:50:06.095812 kubelet[2748]: E0904 23:50:06.094406 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" Sep 4 23:50:06.095812 kubelet[2748]: E0904 23:50:06.094467 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9fcb7bdb4-fjbtd_calico-system(53444f07-cee6-411d-9fba-8985320ac23a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9fcb7bdb4-fjbtd_calico-system(53444f07-cee6-411d-9fba-8985320ac23a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" podUID="53444f07-cee6-411d-9fba-8985320ac23a" Sep 4 23:50:06.111303 containerd[1493]: time="2025-09-04T23:50:06.111162940Z" level=error msg="Failed to destroy network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.115034 containerd[1493]: time="2025-09-04T23:50:06.114964128Z" level=error msg="Failed to destroy network for sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.120783 containerd[1493]: time="2025-09-04T23:50:06.120709005Z" level=error msg="Failed to destroy network for sandbox 
\"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.121330 containerd[1493]: time="2025-09-04T23:50:06.121040319Z" level=error msg="encountered an error cleaning up failed sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.121494 containerd[1493]: time="2025-09-04T23:50:06.121462674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7mb6,Uid:cfe8ce7b-a276-49b2-b396-cff3cb8bdc33,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.123261 kubelet[2748]: E0904 23:50:06.122389 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.123261 kubelet[2748]: E0904 23:50:06.122472 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z7mb6" Sep 4 23:50:06.123261 kubelet[2748]: E0904 23:50:06.122508 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z7mb6" Sep 4 23:50:06.123527 kubelet[2748]: E0904 23:50:06.122588 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z7mb6_kube-system(cfe8ce7b-a276-49b2-b396-cff3cb8bdc33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z7mb6_kube-system(cfe8ce7b-a276-49b2-b396-cff3cb8bdc33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z7mb6" podUID="cfe8ce7b-a276-49b2-b396-cff3cb8bdc33" Sep 4 23:50:06.127644 containerd[1493]: time="2025-09-04T23:50:06.121601977Z" level=error msg="encountered an error cleaning up failed sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.128429 containerd[1493]: time="2025-09-04T23:50:06.126840761Z" level=error 
msg="Failed to destroy network for sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.128429 containerd[1493]: time="2025-09-04T23:50:06.121898956Z" level=error msg="encountered an error cleaning up failed sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.128429 containerd[1493]: time="2025-09-04T23:50:06.128240688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754df77889-htb2w,Uid:c4b9b516-8bab-431f-a718-d4b546f75053,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.129816 kubelet[2748]: E0904 23:50:06.129368 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.129816 kubelet[2748]: E0904 23:50:06.129460 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754df77889-htb2w" Sep 4 23:50:06.129816 kubelet[2748]: E0904 23:50:06.129490 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754df77889-htb2w" Sep 4 23:50:06.129960 containerd[1493]: time="2025-09-04T23:50:06.127939711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8krf,Uid:75d773f2-41cf-4b0b-ab63-23228f5501e2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.130037 kubelet[2748]: E0904 23:50:06.129561 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-754df77889-htb2w_calico-system(c4b9b516-8bab-431f-a718-d4b546f75053)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-754df77889-htb2w_calico-system(c4b9b516-8bab-431f-a718-d4b546f75053)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-754df77889-htb2w" podUID="c4b9b516-8bab-431f-a718-d4b546f75053" Sep 4 23:50:06.130037 kubelet[2748]: E0904 23:50:06.129830 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.130037 kubelet[2748]: E0904 23:50:06.129861 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d8krf" Sep 4 23:50:06.130181 kubelet[2748]: E0904 23:50:06.129884 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d8krf" Sep 4 23:50:06.130181 kubelet[2748]: E0904 23:50:06.129922 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-d8krf_calico-system(75d773f2-41cf-4b0b-ab63-23228f5501e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-d8krf_calico-system(75d773f2-41cf-4b0b-ab63-23228f5501e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-d8krf" podUID="75d773f2-41cf-4b0b-ab63-23228f5501e2" Sep 4 23:50:06.130375 containerd[1493]: time="2025-09-04T23:50:06.130304854Z" level=error msg="encountered an error cleaning up failed sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.130375 containerd[1493]: time="2025-09-04T23:50:06.130374195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.130596 kubelet[2748]: E0904 23:50:06.130532 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.130596 kubelet[2748]: E0904 23:50:06.130572 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-28pvw" Sep 4 23:50:06.130596 kubelet[2748]: E0904 23:50:06.130598 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-28pvw" Sep 4 23:50:06.130922 kubelet[2748]: E0904 23:50:06.130647 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-28pvw_calico-system(a98fe27d-f9b2-46e3-a9a5-1f9fa577590e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-28pvw_calico-system(a98fe27d-f9b2-46e3-a9a5-1f9fa577590e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e" Sep 4 23:50:06.150637 containerd[1493]: time="2025-09-04T23:50:06.150505392Z" level=error msg="Failed to destroy network for sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.151160 
containerd[1493]: time="2025-09-04T23:50:06.151040840Z" level=error msg="encountered an error cleaning up failed sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.151160 containerd[1493]: time="2025-09-04T23:50:06.151145016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-726pc,Uid:24110c1b-5cfd-4ff0-b414-4604e0a02acc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.151578 kubelet[2748]: E0904 23:50:06.151431 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:06.151578 kubelet[2748]: E0904 23:50:06.151519 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" Sep 4 23:50:06.151578 kubelet[2748]: E0904 23:50:06.151551 2748 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" Sep 4 23:50:06.151723 kubelet[2748]: E0904 23:50:06.151617 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c45fc5f4-726pc_calico-apiserver(24110c1b-5cfd-4ff0-b414-4604e0a02acc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78c45fc5f4-726pc_calico-apiserver(24110c1b-5cfd-4ff0-b414-4604e0a02acc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" podUID="24110c1b-5cfd-4ff0-b414-4604e0a02acc" Sep 4 23:50:06.840810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c-shm.mount: Deactivated successfully. Sep 4 23:50:06.841378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4-shm.mount: Deactivated successfully. Sep 4 23:50:06.841477 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8-shm.mount: Deactivated successfully. 
Sep 4 23:50:06.845020 kubelet[2748]: I0904 23:50:06.844979 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a" Sep 4 23:50:06.845756 containerd[1493]: time="2025-09-04T23:50:06.845718524Z" level=info msg="StopPodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\"" Sep 4 23:50:06.846019 kubelet[2748]: I0904 23:50:06.845847 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b" Sep 4 23:50:06.846073 containerd[1493]: time="2025-09-04T23:50:06.846006546Z" level=info msg="Ensure that sandbox cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a in task-service has been cleanup successfully" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.846264341Z" level=info msg="TearDown network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" successfully" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.846286734Z" level=info msg="StopPodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" returns successfully" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.847237374Z" level=info msg="StopPodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\"" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.847328836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-77ks8,Uid:d64b33cd-4dd7-45f1-991e-5a59b10c693d,Namespace:calico-apiserver,Attempt:1,}" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.847470803Z" level=info msg="Ensure that sandbox fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b in task-service has been cleanup successfully" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.847655571Z" level=info 
msg="TearDown network for sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" successfully" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.847668145Z" level=info msg="StopPodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" returns successfully" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.848262344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-726pc,Uid:24110c1b-5cfd-4ff0-b414-4604e0a02acc,Namespace:calico-apiserver,Attempt:1,}" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.848489421Z" level=info msg="StopPodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\"" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.848634244Z" level=info msg="Ensure that sandbox 39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c in task-service has been cleanup successfully" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.848942785Z" level=info msg="TearDown network for sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" successfully" Sep 4 23:50:06.848979 containerd[1493]: time="2025-09-04T23:50:06.848960268Z" level=info msg="StopPodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" returns successfully" Sep 4 23:50:06.849328 kubelet[2748]: I0904 23:50:06.847468 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c" Sep 4 23:50:06.849560 containerd[1493]: time="2025-09-04T23:50:06.849513920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754df77889-htb2w,Uid:c4b9b516-8bab-431f-a718-d4b546f75053,Namespace:calico-system,Attempt:1,}" Sep 4 23:50:06.849739 kubelet[2748]: I0904 23:50:06.849716 2748 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4" Sep 4 23:50:06.851692 containerd[1493]: time="2025-09-04T23:50:06.851329539Z" level=info msg="StopPodSandbox for \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\"" Sep 4 23:50:06.851692 containerd[1493]: time="2025-09-04T23:50:06.851538363Z" level=info msg="Ensure that sandbox 62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4 in task-service has been cleanup successfully" Sep 4 23:50:06.852993 containerd[1493]: time="2025-09-04T23:50:06.851891667Z" level=info msg="TearDown network for sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\" successfully" Sep 4 23:50:06.852993 containerd[1493]: time="2025-09-04T23:50:06.851913409Z" level=info msg="StopPodSandbox for \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\" returns successfully" Sep 4 23:50:06.852993 containerd[1493]: time="2025-09-04T23:50:06.852841607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7s5n,Uid:72827434-065b-42a4-a2fc-ad0f0cccd362,Namespace:kube-system,Attempt:1,}" Sep 4 23:50:06.853146 kubelet[2748]: E0904 23:50:06.852498 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:06.854897 systemd[1]: run-netns-cni\x2d368dc29b\x2de280\x2df374\x2d1078\x2d7125e4abb1d7.mount: Deactivated successfully. Sep 4 23:50:06.855551 systemd[1]: run-netns-cni\x2df343c853\x2dbace\x2d2de1\x2d1ddf\x2df3f1ef1be5dc.mount: Deactivated successfully. Sep 4 23:50:06.858814 systemd[1]: run-netns-cni\x2df8c274a1\x2d4ac4\x2df3ff\x2d10bf\x2d1bc424b06319.mount: Deactivated successfully. 
Sep 4 23:50:06.860492 kubelet[2748]: I0904 23:50:06.860466 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0" Sep 4 23:50:06.861234 containerd[1493]: time="2025-09-04T23:50:06.861089537Z" level=info msg="StopPodSandbox for \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\"" Sep 4 23:50:06.861488 containerd[1493]: time="2025-09-04T23:50:06.861382298Z" level=info msg="Ensure that sandbox 08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0 in task-service has been cleanup successfully" Sep 4 23:50:06.861775 containerd[1493]: time="2025-09-04T23:50:06.861687363Z" level=info msg="TearDown network for sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" successfully" Sep 4 23:50:06.861775 containerd[1493]: time="2025-09-04T23:50:06.861711198Z" level=info msg="StopPodSandbox for \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" returns successfully" Sep 4 23:50:06.864489 containerd[1493]: time="2025-09-04T23:50:06.864461166Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\"" Sep 4 23:50:06.864643 containerd[1493]: time="2025-09-04T23:50:06.864567406Z" level=info msg="TearDown network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" successfully" Sep 4 23:50:06.864643 containerd[1493]: time="2025-09-04T23:50:06.864584348Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" returns successfully" Sep 4 23:50:06.864927 kubelet[2748]: I0904 23:50:06.864759 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c" Sep 4 23:50:06.865162 systemd[1]: run-netns-cni\x2d5fa93753\x2de9fe\x2deb0c\x2dc6b2\x2d0cb260bcf743.mount: Deactivated successfully. 
Sep 4 23:50:06.867351 kubelet[2748]: E0904 23:50:06.866686 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:06.867351 kubelet[2748]: I0904 23:50:06.866843 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d" Sep 4 23:50:06.867431 containerd[1493]: time="2025-09-04T23:50:06.865290106Z" level=info msg="StopPodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\"" Sep 4 23:50:06.867431 containerd[1493]: time="2025-09-04T23:50:06.865512155Z" level=info msg="Ensure that sandbox 3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c in task-service has been cleanup successfully" Sep 4 23:50:06.867431 containerd[1493]: time="2025-09-04T23:50:06.866190302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:2,}" Sep 4 23:50:06.867431 containerd[1493]: time="2025-09-04T23:50:06.866503371Z" level=info msg="TearDown network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" successfully" Sep 4 23:50:06.867431 containerd[1493]: time="2025-09-04T23:50:06.866520243Z" level=info msg="StopPodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" returns successfully" Sep 4 23:50:06.867431 containerd[1493]: time="2025-09-04T23:50:06.867312425Z" level=info msg="StopPodSandbox for \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\"" Sep 4 23:50:06.867431 containerd[1493]: time="2025-09-04T23:50:06.867417112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7mb6,Uid:cfe8ce7b-a276-49b2-b396-cff3cb8bdc33,Namespace:kube-system,Attempt:1,}" Sep 4 23:50:06.865386 systemd[1]: 
run-netns-cni\x2d5d66f2d3\x2d930f\x2da8ab\x2d5523\x2d4ddc17999d86.mount: Deactivated successfully. Sep 4 23:50:06.867705 containerd[1493]: time="2025-09-04T23:50:06.867544402Z" level=info msg="Ensure that sandbox d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d in task-service has been cleanup successfully" Sep 4 23:50:06.867757 containerd[1493]: time="2025-09-04T23:50:06.867731414Z" level=info msg="TearDown network for sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\" successfully" Sep 4 23:50:06.867757 containerd[1493]: time="2025-09-04T23:50:06.867753114Z" level=info msg="StopPodSandbox for \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\" returns successfully" Sep 4 23:50:06.869073 containerd[1493]: time="2025-09-04T23:50:06.868722340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8krf,Uid:75d773f2-41cf-4b0b-ab63-23228f5501e2,Namespace:calico-system,Attempt:1,}" Sep 4 23:50:06.869444 kubelet[2748]: I0904 23:50:06.869345 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8" Sep 4 23:50:06.870203 containerd[1493]: time="2025-09-04T23:50:06.869885520Z" level=info msg="StopPodSandbox for \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\"" Sep 4 23:50:06.870203 containerd[1493]: time="2025-09-04T23:50:06.870051102Z" level=info msg="Ensure that sandbox caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8 in task-service has been cleanup successfully" Sep 4 23:50:06.870409 systemd[1]: run-netns-cni\x2d7469703c\x2d1bb5\x2d02cf\x2d8b57\x2dada99fb09606.mount: Deactivated successfully. Sep 4 23:50:06.870596 systemd[1]: run-netns-cni\x2d5ee6fcc8\x2d78f6\x2ddba9\x2d3980\x2d706932118154.mount: Deactivated successfully. 
Sep 4 23:50:06.870680 containerd[1493]: time="2025-09-04T23:50:06.870613350Z" level=info msg="TearDown network for sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\" successfully" Sep 4 23:50:06.870680 containerd[1493]: time="2025-09-04T23:50:06.870632396Z" level=info msg="StopPodSandbox for \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\" returns successfully" Sep 4 23:50:06.871354 containerd[1493]: time="2025-09-04T23:50:06.871083095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fcb7bdb4-fjbtd,Uid:53444f07-cee6-411d-9fba-8985320ac23a,Namespace:calico-system,Attempt:1,}" Sep 4 23:50:06.874194 systemd[1]: run-netns-cni\x2df37c9a9f\x2d11c5\x2d6bfc\x2d41d7\x2d0b4eb01d024f.mount: Deactivated successfully. Sep 4 23:50:16.425280 containerd[1493]: time="2025-09-04T23:50:16.425224257Z" level=error msg="Failed to destroy network for sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:16.427851 containerd[1493]: time="2025-09-04T23:50:16.426259105Z" level=error msg="encountered an error cleaning up failed sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:16.427851 containerd[1493]: time="2025-09-04T23:50:16.426336731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-726pc,Uid:24110c1b-5cfd-4ff0-b414-4604e0a02acc,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:16.428020 kubelet[2748]: E0904 23:50:16.426659 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:16.428020 kubelet[2748]: E0904 23:50:16.426734 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" Sep 4 23:50:16.428020 kubelet[2748]: E0904 23:50:16.426767 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" Sep 4 23:50:16.428513 kubelet[2748]: E0904 23:50:16.426830 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c45fc5f4-726pc_calico-apiserver(24110c1b-5cfd-4ff0-b414-4604e0a02acc)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-78c45fc5f4-726pc_calico-apiserver(24110c1b-5cfd-4ff0-b414-4604e0a02acc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" podUID="24110c1b-5cfd-4ff0-b414-4604e0a02acc" Sep 4 23:50:16.429052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e-shm.mount: Deactivated successfully. Sep 4 23:50:16.892284 kubelet[2748]: I0904 23:50:16.892151 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e" Sep 4 23:50:16.892751 containerd[1493]: time="2025-09-04T23:50:16.892702354Z" level=info msg="StopPodSandbox for \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\"" Sep 4 23:50:16.892960 containerd[1493]: time="2025-09-04T23:50:16.892937396Z" level=info msg="Ensure that sandbox d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e in task-service has been cleanup successfully" Sep 4 23:50:16.893245 containerd[1493]: time="2025-09-04T23:50:16.893179132Z" level=info msg="TearDown network for sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\" successfully" Sep 4 23:50:16.893245 containerd[1493]: time="2025-09-04T23:50:16.893195142Z" level=info msg="StopPodSandbox for \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\" returns successfully" Sep 4 23:50:16.895155 containerd[1493]: time="2025-09-04T23:50:16.893429633Z" level=info msg="StopPodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\"" Sep 4 23:50:16.895155 containerd[1493]: 
time="2025-09-04T23:50:16.893504915Z" level=info msg="TearDown network for sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" successfully" Sep 4 23:50:16.895155 containerd[1493]: time="2025-09-04T23:50:16.893513831Z" level=info msg="StopPodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" returns successfully" Sep 4 23:50:16.895155 containerd[1493]: time="2025-09-04T23:50:16.893870222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-726pc,Uid:24110c1b-5cfd-4ff0-b414-4604e0a02acc,Namespace:calico-apiserver,Attempt:2,}" Sep 4 23:50:16.895757 systemd[1]: run-netns-cni\x2d23bb2210\x2dd295\x2d2686\x2dd32b\x2d17c7f2f98e9c.mount: Deactivated successfully. Sep 4 23:50:20.167651 containerd[1493]: time="2025-09-04T23:50:20.167562283Z" level=error msg="Failed to destroy network for sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.168187 containerd[1493]: time="2025-09-04T23:50:20.168091850Z" level=error msg="encountered an error cleaning up failed sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.168187 containerd[1493]: time="2025-09-04T23:50:20.168167603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-77ks8,Uid:d64b33cd-4dd7-45f1-991e-5a59b10c693d,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.168528 kubelet[2748]: E0904 23:50:20.168467 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.169028 kubelet[2748]: E0904 23:50:20.168550 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" Sep 4 23:50:20.169028 kubelet[2748]: E0904 23:50:20.168575 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" Sep 4 23:50:20.169028 kubelet[2748]: E0904 23:50:20.168649 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c45fc5f4-77ks8_calico-apiserver(d64b33cd-4dd7-45f1-991e-5a59b10c693d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-78c45fc5f4-77ks8_calico-apiserver(d64b33cd-4dd7-45f1-991e-5a59b10c693d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" podUID="d64b33cd-4dd7-45f1-991e-5a59b10c693d" Sep 4 23:50:20.171026 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de-shm.mount: Deactivated successfully. Sep 4 23:50:20.504910 containerd[1493]: time="2025-09-04T23:50:20.504739547Z" level=error msg="Failed to destroy network for sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.505375 containerd[1493]: time="2025-09-04T23:50:20.505340317Z" level=error msg="encountered an error cleaning up failed sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.505464 containerd[1493]: time="2025-09-04T23:50:20.505418705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.505779 kubelet[2748]: E0904 23:50:20.505717 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.505962 kubelet[2748]: E0904 23:50:20.505808 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-28pvw" Sep 4 23:50:20.505962 kubelet[2748]: E0904 23:50:20.505843 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-28pvw" Sep 4 23:50:20.505962 kubelet[2748]: E0904 23:50:20.505915 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-28pvw_calico-system(a98fe27d-f9b2-46e3-a9a5-1f9fa577590e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-28pvw_calico-system(a98fe27d-f9b2-46e3-a9a5-1f9fa577590e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e" Sep 4 23:50:20.640885 containerd[1493]: time="2025-09-04T23:50:20.640798989Z" level=error msg="Failed to destroy network for sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.641497 containerd[1493]: time="2025-09-04T23:50:20.641333055Z" level=error msg="encountered an error cleaning up failed sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.641497 containerd[1493]: time="2025-09-04T23:50:20.641397196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754df77889-htb2w,Uid:c4b9b516-8bab-431f-a718-d4b546f75053,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.641983 kubelet[2748]: E0904 23:50:20.641902 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.642068 kubelet[2748]: E0904 23:50:20.642021 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754df77889-htb2w" Sep 4 23:50:20.642068 kubelet[2748]: E0904 23:50:20.642054 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754df77889-htb2w" Sep 4 23:50:20.642423 kubelet[2748]: E0904 23:50:20.642151 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-754df77889-htb2w_calico-system(c4b9b516-8bab-431f-a718-d4b546f75053)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-754df77889-htb2w_calico-system(c4b9b516-8bab-431f-a718-d4b546f75053)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-754df77889-htb2w" podUID="c4b9b516-8bab-431f-a718-d4b546f75053" Sep 4 23:50:20.728287 containerd[1493]: 
time="2025-09-04T23:50:20.728233659Z" level=error msg="Failed to destroy network for sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.728698 containerd[1493]: time="2025-09-04T23:50:20.728644873Z" level=error msg="encountered an error cleaning up failed sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.728961 containerd[1493]: time="2025-09-04T23:50:20.728716598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7mb6,Uid:cfe8ce7b-a276-49b2-b396-cff3cb8bdc33,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.729185 kubelet[2748]: E0904 23:50:20.729083 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.729250 kubelet[2748]: E0904 23:50:20.729203 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z7mb6" Sep 4 23:50:20.729250 kubelet[2748]: E0904 23:50:20.729233 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z7mb6" Sep 4 23:50:20.729353 kubelet[2748]: E0904 23:50:20.729312 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z7mb6_kube-system(cfe8ce7b-a276-49b2-b396-cff3cb8bdc33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z7mb6_kube-system(cfe8ce7b-a276-49b2-b396-cff3cb8bdc33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z7mb6" podUID="cfe8ce7b-a276-49b2-b396-cff3cb8bdc33" Sep 4 23:50:20.742499 containerd[1493]: time="2025-09-04T23:50:20.742449570Z" level=error msg="Failed to destroy network for sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.742834 
containerd[1493]: time="2025-09-04T23:50:20.742803636Z" level=error msg="encountered an error cleaning up failed sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.742912 containerd[1493]: time="2025-09-04T23:50:20.742856596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8krf,Uid:75d773f2-41cf-4b0b-ab63-23228f5501e2,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.743187 kubelet[2748]: E0904 23:50:20.743113 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:20.743391 kubelet[2748]: E0904 23:50:20.743318 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d8krf" Sep 4 23:50:20.743391 kubelet[2748]: E0904 23:50:20.743390 2748 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d8krf" Sep 4 23:50:20.743582 kubelet[2748]: E0904 23:50:20.743458 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-d8krf_calico-system(75d773f2-41cf-4b0b-ab63-23228f5501e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-d8krf_calico-system(75d773f2-41cf-4b0b-ab63-23228f5501e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-d8krf" podUID="75d773f2-41cf-4b0b-ab63-23228f5501e2" Sep 4 23:50:20.904106 kubelet[2748]: I0904 23:50:20.903828 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8" Sep 4 23:50:20.904525 containerd[1493]: time="2025-09-04T23:50:20.904305823Z" level=info msg="StopPodSandbox for \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\"" Sep 4 23:50:20.905158 containerd[1493]: time="2025-09-04T23:50:20.904979050Z" level=info msg="Ensure that sandbox 4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8 in task-service has been cleanup successfully" Sep 4 23:50:20.905813 containerd[1493]: time="2025-09-04T23:50:20.905787312Z" level=info msg="TearDown network for sandbox 
\"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\" successfully" Sep 4 23:50:20.906150 containerd[1493]: time="2025-09-04T23:50:20.906090613Z" level=info msg="StopPodSandbox for \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\" returns successfully" Sep 4 23:50:20.906366 kubelet[2748]: I0904 23:50:20.906346 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561" Sep 4 23:50:20.907166 containerd[1493]: time="2025-09-04T23:50:20.906743853Z" level=info msg="StopPodSandbox for \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\"" Sep 4 23:50:20.907166 containerd[1493]: time="2025-09-04T23:50:20.906790531Z" level=info msg="StopPodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\"" Sep 4 23:50:20.907166 containerd[1493]: time="2025-09-04T23:50:20.906982732Z" level=info msg="TearDown network for sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" successfully" Sep 4 23:50:20.907166 containerd[1493]: time="2025-09-04T23:50:20.906995656Z" level=info msg="Ensure that sandbox ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561 in task-service has been cleanup successfully" Sep 4 23:50:20.907379 containerd[1493]: time="2025-09-04T23:50:20.907329655Z" level=info msg="TearDown network for sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\" successfully" Sep 4 23:50:20.907379 containerd[1493]: time="2025-09-04T23:50:20.907347880Z" level=info msg="StopPodSandbox for \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\" returns successfully" Sep 4 23:50:20.907574 containerd[1493]: time="2025-09-04T23:50:20.906998441Z" level=info msg="StopPodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" returns successfully" Sep 4 23:50:20.909330 containerd[1493]: time="2025-09-04T23:50:20.909298632Z" 
level=info msg="StopPodSandbox for \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\"" Sep 4 23:50:20.909399 containerd[1493]: time="2025-09-04T23:50:20.909382249Z" level=info msg="TearDown network for sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" successfully" Sep 4 23:50:20.909432 containerd[1493]: time="2025-09-04T23:50:20.909397588Z" level=info msg="StopPodSandbox for \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" returns successfully" Sep 4 23:50:20.909588 containerd[1493]: time="2025-09-04T23:50:20.909541338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754df77889-htb2w,Uid:c4b9b516-8bab-431f-a718-d4b546f75053,Namespace:calico-system,Attempt:2,}" Sep 4 23:50:20.909768 containerd[1493]: time="2025-09-04T23:50:20.909744782Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\"" Sep 4 23:50:20.909867 containerd[1493]: time="2025-09-04T23:50:20.909847986Z" level=info msg="TearDown network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" successfully" Sep 4 23:50:20.909904 containerd[1493]: time="2025-09-04T23:50:20.909865629Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" returns successfully" Sep 4 23:50:20.910403 containerd[1493]: time="2025-09-04T23:50:20.910382623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:3,}" Sep 4 23:50:20.910543 kubelet[2748]: I0904 23:50:20.910458 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144" Sep 4 23:50:20.911650 containerd[1493]: time="2025-09-04T23:50:20.911615523Z" level=info msg="StopPodSandbox for \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\"" Sep 4 
23:50:20.912109 containerd[1493]: time="2025-09-04T23:50:20.911887966Z" level=info msg="Ensure that sandbox 8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144 in task-service has been cleanup successfully" Sep 4 23:50:20.912210 containerd[1493]: time="2025-09-04T23:50:20.912175127Z" level=info msg="TearDown network for sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\" successfully" Sep 4 23:50:20.912210 containerd[1493]: time="2025-09-04T23:50:20.912201867Z" level=info msg="StopPodSandbox for \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\" returns successfully" Sep 4 23:50:20.912697 containerd[1493]: time="2025-09-04T23:50:20.912667564Z" level=info msg="StopPodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\"" Sep 4 23:50:20.912787 containerd[1493]: time="2025-09-04T23:50:20.912751642Z" level=info msg="TearDown network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" successfully" Sep 4 23:50:20.912787 containerd[1493]: time="2025-09-04T23:50:20.912780607Z" level=info msg="StopPodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" returns successfully" Sep 4 23:50:20.913057 kubelet[2748]: E0904 23:50:20.913027 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:20.913240 kubelet[2748]: I0904 23:50:20.913219 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63" Sep 4 23:50:20.913888 containerd[1493]: time="2025-09-04T23:50:20.913595481Z" level=info msg="StopPodSandbox for \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\"" Sep 4 23:50:20.913888 containerd[1493]: time="2025-09-04T23:50:20.913621860Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-z7mb6,Uid:cfe8ce7b-a276-49b2-b396-cff3cb8bdc33,Namespace:kube-system,Attempt:2,}" Sep 4 23:50:20.913888 containerd[1493]: time="2025-09-04T23:50:20.913767705Z" level=info msg="Ensure that sandbox 5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63 in task-service has been cleanup successfully" Sep 4 23:50:20.914234 containerd[1493]: time="2025-09-04T23:50:20.914218092Z" level=info msg="TearDown network for sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\" successfully" Sep 4 23:50:20.914343 containerd[1493]: time="2025-09-04T23:50:20.914329352Z" level=info msg="StopPodSandbox for \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\" returns successfully" Sep 4 23:50:20.914950 containerd[1493]: time="2025-09-04T23:50:20.914933029Z" level=info msg="StopPodSandbox for \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\"" Sep 4 23:50:20.915244 containerd[1493]: time="2025-09-04T23:50:20.915201013Z" level=info msg="TearDown network for sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\" successfully" Sep 4 23:50:20.915244 containerd[1493]: time="2025-09-04T23:50:20.915232843Z" level=info msg="StopPodSandbox for \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\" returns successfully" Sep 4 23:50:20.915778 kubelet[2748]: I0904 23:50:20.915492 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de" Sep 4 23:50:20.915992 containerd[1493]: time="2025-09-04T23:50:20.915953729Z" level=info msg="StopPodSandbox for \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\"" Sep 4 23:50:20.916040 containerd[1493]: time="2025-09-04T23:50:20.916011148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8krf,Uid:75d773f2-41cf-4b0b-ab63-23228f5501e2,Namespace:calico-system,Attempt:2,}" Sep 4 
23:50:20.916191 containerd[1493]: time="2025-09-04T23:50:20.916163084Z" level=info msg="Ensure that sandbox 3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de in task-service has been cleanup successfully" Sep 4 23:50:20.916380 containerd[1493]: time="2025-09-04T23:50:20.916356047Z" level=info msg="TearDown network for sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\" successfully" Sep 4 23:50:20.916414 containerd[1493]: time="2025-09-04T23:50:20.916377557Z" level=info msg="StopPodSandbox for \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\" returns successfully" Sep 4 23:50:20.916796 containerd[1493]: time="2025-09-04T23:50:20.916662403Z" level=info msg="StopPodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\"" Sep 4 23:50:20.916796 containerd[1493]: time="2025-09-04T23:50:20.916757513Z" level=info msg="TearDown network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" successfully" Sep 4 23:50:20.916796 containerd[1493]: time="2025-09-04T23:50:20.916770377Z" level=info msg="StopPodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" returns successfully" Sep 4 23:50:20.917315 containerd[1493]: time="2025-09-04T23:50:20.917254278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-77ks8,Uid:d64b33cd-4dd7-45f1-991e-5a59b10c693d,Namespace:calico-apiserver,Attempt:2,}" Sep 4 23:50:21.057538 containerd[1493]: time="2025-09-04T23:50:21.057477917Z" level=error msg="Failed to destroy network for sandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:21.083900 containerd[1493]: time="2025-09-04T23:50:21.058249921Z" level=error msg="encountered an error cleaning up failed 
sandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:21.083900 containerd[1493]: time="2025-09-04T23:50:21.058332467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7s5n,Uid:72827434-065b-42a4-a2fc-ad0f0cccd362,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:21.084093 kubelet[2748]: E0904 23:50:21.058616 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:21.084093 kubelet[2748]: E0904 23:50:21.058714 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q7s5n" Sep 4 23:50:21.084093 kubelet[2748]: E0904 23:50:21.058745 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q7s5n" Sep 4 23:50:21.083949 systemd[1]: run-netns-cni\x2dce0ffc24\x2d9209\x2d615c\x2d2537\x2dd1ce9fea4402.mount: Deactivated successfully. Sep 4 23:50:21.084386 kubelet[2748]: E0904 23:50:21.058810 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q7s5n_kube-system(72827434-065b-42a4-a2fc-ad0f0cccd362)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q7s5n_kube-system(72827434-065b-42a4-a2fc-ad0f0cccd362)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q7s5n" podUID="72827434-065b-42a4-a2fc-ad0f0cccd362" Sep 4 23:50:21.086376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144-shm.mount: Deactivated successfully. Sep 4 23:50:21.086498 systemd[1]: run-netns-cni\x2d4a482d99\x2dbcbf\x2d2a68\x2d1dde\x2deb73696be525.mount: Deactivated successfully. Sep 4 23:50:21.086597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8-shm.mount: Deactivated successfully. Sep 4 23:50:21.086696 systemd[1]: run-netns-cni\x2d1d4f4df5\x2df008\x2d02cb\x2d56b5\x2dfcc315ce19b4.mount: Deactivated successfully. 
Sep 4 23:50:21.086793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561-shm.mount: Deactivated successfully. Sep 4 23:50:21.086915 systemd[1]: run-netns-cni\x2df308b473\x2d0670\x2d0818\x2d8685\x2dbcb014020105.mount: Deactivated successfully. Sep 4 23:50:21.476291 containerd[1493]: time="2025-09-04T23:50:21.476228765Z" level=error msg="Failed to destroy network for sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:21.477736 containerd[1493]: time="2025-09-04T23:50:21.476751299Z" level=error msg="encountered an error cleaning up failed sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:21.477736 containerd[1493]: time="2025-09-04T23:50:21.476828634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fcb7bdb4-fjbtd,Uid:53444f07-cee6-411d-9fba-8985320ac23a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:21.477955 kubelet[2748]: E0904 23:50:21.477184 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:21.477955 kubelet[2748]: E0904 23:50:21.477271 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" Sep 4 23:50:21.477955 kubelet[2748]: E0904 23:50:21.477302 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" Sep 4 23:50:21.480067 kubelet[2748]: E0904 23:50:21.477377 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9fcb7bdb4-fjbtd_calico-system(53444f07-cee6-411d-9fba-8985320ac23a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9fcb7bdb4-fjbtd_calico-system(53444f07-cee6-411d-9fba-8985320ac23a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" 
podUID="53444f07-cee6-411d-9fba-8985320ac23a" Sep 4 23:50:21.480944 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83-shm.mount: Deactivated successfully. Sep 4 23:50:21.821484 systemd[1]: Started sshd@7-10.0.0.62:22-10.0.0.1:55494.service - OpenSSH per-connection server daemon (10.0.0.1:55494). Sep 4 23:50:21.873443 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 55494 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:50:21.875518 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:21.881947 systemd-logind[1475]: New session 8 of user core. Sep 4 23:50:21.886662 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 23:50:21.922002 kubelet[2748]: I0904 23:50:21.921945 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de" Sep 4 23:50:21.923045 containerd[1493]: time="2025-09-04T23:50:21.922576301Z" level=info msg="StopPodSandbox for \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\"" Sep 4 23:50:21.923045 containerd[1493]: time="2025-09-04T23:50:21.922841991Z" level=info msg="Ensure that sandbox 28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de in task-service has been cleanup successfully" Sep 4 23:50:21.923413 containerd[1493]: time="2025-09-04T23:50:21.923389421Z" level=info msg="TearDown network for sandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\" successfully" Sep 4 23:50:21.923502 containerd[1493]: time="2025-09-04T23:50:21.923482587Z" level=info msg="StopPodSandbox for \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\" returns successfully" Sep 4 23:50:21.924636 containerd[1493]: time="2025-09-04T23:50:21.924591544Z" level=info msg="StopPodSandbox for 
\"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\"" Sep 4 23:50:21.924791 containerd[1493]: time="2025-09-04T23:50:21.924727851Z" level=info msg="TearDown network for sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\" successfully" Sep 4 23:50:21.924791 containerd[1493]: time="2025-09-04T23:50:21.924782965Z" level=info msg="StopPodSandbox for \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\" returns successfully" Sep 4 23:50:21.925159 kubelet[2748]: E0904 23:50:21.925096 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:21.926204 containerd[1493]: time="2025-09-04T23:50:21.926179754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7s5n,Uid:72827434-065b-42a4-a2fc-ad0f0cccd362,Namespace:kube-system,Attempt:2,}" Sep 4 23:50:21.926773 systemd[1]: run-netns-cni\x2dfe5c9e29\x2d0981\x2d1b83\x2d7b97\x2d50938f54779f.mount: Deactivated successfully. 
Sep 4 23:50:21.928596 kubelet[2748]: I0904 23:50:21.928282 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83" Sep 4 23:50:21.929249 containerd[1493]: time="2025-09-04T23:50:21.929215880Z" level=info msg="StopPodSandbox for \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\"" Sep 4 23:50:21.930161 containerd[1493]: time="2025-09-04T23:50:21.930103871Z" level=info msg="Ensure that sandbox ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83 in task-service has been cleanup successfully" Sep 4 23:50:21.930514 containerd[1493]: time="2025-09-04T23:50:21.930450153Z" level=info msg="TearDown network for sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\" successfully" Sep 4 23:50:21.930514 containerd[1493]: time="2025-09-04T23:50:21.930469900Z" level=info msg="StopPodSandbox for \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\" returns successfully" Sep 4 23:50:21.933155 systemd[1]: run-netns-cni\x2dee007245\x2d2973\x2d5857\x2d2941\x2db5582e86c369.mount: Deactivated successfully. 
Sep 4 23:50:21.933732 containerd[1493]: time="2025-09-04T23:50:21.933697946Z" level=info msg="StopPodSandbox for \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\"" Sep 4 23:50:21.933844 containerd[1493]: time="2025-09-04T23:50:21.933825897Z" level=info msg="TearDown network for sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\" successfully" Sep 4 23:50:21.933844 containerd[1493]: time="2025-09-04T23:50:21.933841637Z" level=info msg="StopPodSandbox for \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\" returns successfully" Sep 4 23:50:21.934420 containerd[1493]: time="2025-09-04T23:50:21.934342871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fcb7bdb4-fjbtd,Uid:53444f07-cee6-411d-9fba-8985320ac23a,Namespace:calico-system,Attempt:2,}" Sep 4 23:50:22.198172 sshd[4151]: Connection closed by 10.0.0.1 port 55494 Sep 4 23:50:22.197977 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:22.201533 systemd[1]: sshd@7-10.0.0.62:22-10.0.0.1:55494.service: Deactivated successfully. Sep 4 23:50:22.205565 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:50:22.214412 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:50:22.216156 systemd-logind[1475]: Removed session 8. Sep 4 23:50:23.176401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745734633.mount: Deactivated successfully. Sep 4 23:50:26.151396 kubelet[2748]: E0904 23:50:26.151326 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:27.216324 systemd[1]: Started sshd@8-10.0.0.62:22-10.0.0.1:55510.service - OpenSSH per-connection server daemon (10.0.0.1:55510). 
Sep 4 23:50:27.320012 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 55510 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:50:27.321903 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:27.328043 systemd-logind[1475]: New session 9 of user core. Sep 4 23:50:27.337312 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:50:27.601961 sshd[4169]: Connection closed by 10.0.0.1 port 55510 Sep 4 23:50:27.602823 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:27.608202 systemd[1]: sshd@8-10.0.0.62:22-10.0.0.1:55510.service: Deactivated successfully. Sep 4 23:50:27.610330 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:50:27.611129 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:50:27.612112 systemd-logind[1475]: Removed session 9. Sep 4 23:50:30.817176 containerd[1493]: time="2025-09-04T23:50:30.817062533Z" level=error msg="Failed to destroy network for sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:30.817885 containerd[1493]: time="2025-09-04T23:50:30.817585698Z" level=error msg="encountered an error cleaning up failed sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:30.817885 containerd[1493]: time="2025-09-04T23:50:30.817655492Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-726pc,Uid:24110c1b-5cfd-4ff0-b414-4604e0a02acc,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:30.818018 kubelet[2748]: E0904 23:50:30.817939 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:30.818440 kubelet[2748]: E0904 23:50:30.818016 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" Sep 4 23:50:30.818440 kubelet[2748]: E0904 23:50:30.818045 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" Sep 4 23:50:30.818440 kubelet[2748]: E0904 23:50:30.818103 2748 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c45fc5f4-726pc_calico-apiserver(24110c1b-5cfd-4ff0-b414-4604e0a02acc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78c45fc5f4-726pc_calico-apiserver(24110c1b-5cfd-4ff0-b414-4604e0a02acc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" podUID="24110c1b-5cfd-4ff0-b414-4604e0a02acc" Sep 4 23:50:30.819905 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814-shm.mount: Deactivated successfully. Sep 4 23:50:30.956728 kubelet[2748]: I0904 23:50:30.956663 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814" Sep 4 23:50:30.957299 containerd[1493]: time="2025-09-04T23:50:30.957245591Z" level=info msg="StopPodSandbox for \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\"" Sep 4 23:50:30.957533 containerd[1493]: time="2025-09-04T23:50:30.957504428Z" level=info msg="Ensure that sandbox f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814 in task-service has been cleanup successfully" Sep 4 23:50:30.960147 containerd[1493]: time="2025-09-04T23:50:30.957741544Z" level=info msg="TearDown network for sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\" successfully" Sep 4 23:50:30.960147 containerd[1493]: time="2025-09-04T23:50:30.957759748Z" level=info msg="StopPodSandbox for \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\" returns successfully" Sep 4 23:50:30.960147 containerd[1493]: 
time="2025-09-04T23:50:30.958055817Z" level=info msg="StopPodSandbox for \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\"" Sep 4 23:50:30.960147 containerd[1493]: time="2025-09-04T23:50:30.958158765Z" level=info msg="TearDown network for sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\" successfully" Sep 4 23:50:30.960147 containerd[1493]: time="2025-09-04T23:50:30.958168343Z" level=info msg="StopPodSandbox for \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\" returns successfully" Sep 4 23:50:30.960147 containerd[1493]: time="2025-09-04T23:50:30.958442450Z" level=info msg="StopPodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\"" Sep 4 23:50:30.960147 containerd[1493]: time="2025-09-04T23:50:30.958603470Z" level=info msg="TearDown network for sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" successfully" Sep 4 23:50:30.960147 containerd[1493]: time="2025-09-04T23:50:30.958613729Z" level=info msg="StopPodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" returns successfully" Sep 4 23:50:30.960147 containerd[1493]: time="2025-09-04T23:50:30.959061089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-726pc,Uid:24110c1b-5cfd-4ff0-b414-4604e0a02acc,Namespace:calico-apiserver,Attempt:3,}" Sep 4 23:50:30.961082 systemd[1]: run-netns-cni\x2d0a582a86\x2de37d\x2dd827\x2d6ae7\x2d3a66130274b2.mount: Deactivated successfully. 
Sep 4 23:50:31.151792 kubelet[2748]: E0904 23:50:31.151634 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:31.348444 containerd[1493]: time="2025-09-04T23:50:31.348351626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:31.604532 containerd[1493]: time="2025-09-04T23:50:31.604425219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 23:50:31.621525 containerd[1493]: time="2025-09-04T23:50:31.621429239Z" level=error msg="Failed to destroy network for sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.622082 containerd[1493]: time="2025-09-04T23:50:31.622029772Z" level=error msg="encountered an error cleaning up failed sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.622217 containerd[1493]: time="2025-09-04T23:50:31.622109976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754df77889-htb2w,Uid:c4b9b516-8bab-431f-a718-d4b546f75053,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 4 23:50:31.622829 kubelet[2748]: E0904 23:50:31.622562 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.622829 kubelet[2748]: E0904 23:50:31.622673 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754df77889-htb2w" Sep 4 23:50:31.622829 kubelet[2748]: E0904 23:50:31.622703 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754df77889-htb2w" Sep 4 23:50:31.622970 kubelet[2748]: E0904 23:50:31.622785 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-754df77889-htb2w_calico-system(c4b9b516-8bab-431f-a718-d4b546f75053)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-754df77889-htb2w_calico-system(c4b9b516-8bab-431f-a718-d4b546f75053)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-754df77889-htb2w" podUID="c4b9b516-8bab-431f-a718-d4b546f75053" Sep 4 23:50:31.697087 containerd[1493]: time="2025-09-04T23:50:31.697014716Z" level=error msg="Failed to destroy network for sandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.697550 containerd[1493]: time="2025-09-04T23:50:31.697507713Z" level=error msg="encountered an error cleaning up failed sandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.697618 containerd[1493]: time="2025-09-04T23:50:31.697597576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-77ks8,Uid:d64b33cd-4dd7-45f1-991e-5a59b10c693d,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.697960 kubelet[2748]: E0904 23:50:31.697912 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.698066 kubelet[2748]: E0904 23:50:31.697992 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" Sep 4 23:50:31.698066 kubelet[2748]: E0904 23:50:31.698022 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" Sep 4 23:50:31.698216 kubelet[2748]: E0904 23:50:31.698172 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c45fc5f4-77ks8_calico-apiserver(d64b33cd-4dd7-45f1-991e-5a59b10c693d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78c45fc5f4-77ks8_calico-apiserver(d64b33cd-4dd7-45f1-991e-5a59b10c693d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" podUID="d64b33cd-4dd7-45f1-991e-5a59b10c693d" Sep 4 23:50:31.773049 containerd[1493]: 
time="2025-09-04T23:50:31.772965426Z" level=error msg="Failed to destroy network for sandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.773519 containerd[1493]: time="2025-09-04T23:50:31.773472820Z" level=error msg="encountered an error cleaning up failed sandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.773576 containerd[1493]: time="2025-09-04T23:50:31.773560108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.773876 kubelet[2748]: E0904 23:50:31.773832 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.773963 kubelet[2748]: E0904 23:50:31.773910 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-28pvw" Sep 4 23:50:31.773963 kubelet[2748]: E0904 23:50:31.773935 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-28pvw" Sep 4 23:50:31.774027 kubelet[2748]: E0904 23:50:31.774000 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-28pvw_calico-system(a98fe27d-f9b2-46e3-a9a5-1f9fa577590e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-28pvw_calico-system(a98fe27d-f9b2-46e3-a9a5-1f9fa577590e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-28pvw" podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e" Sep 4 23:50:31.866614 containerd[1493]: time="2025-09-04T23:50:31.866388925Z" level=error msg="Failed to destroy network for sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.867385 
containerd[1493]: time="2025-09-04T23:50:31.866863136Z" level=error msg="encountered an error cleaning up failed sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.867385 containerd[1493]: time="2025-09-04T23:50:31.866927941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7mb6,Uid:cfe8ce7b-a276-49b2-b396-cff3cb8bdc33,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.867483 kubelet[2748]: E0904 23:50:31.867211 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:31.867483 kubelet[2748]: E0904 23:50:31.867283 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z7mb6" Sep 4 23:50:31.867483 kubelet[2748]: E0904 23:50:31.867306 2748 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z7mb6" Sep 4 23:50:31.959338 kubelet[2748]: E0904 23:50:31.867358 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z7mb6_kube-system(cfe8ce7b-a276-49b2-b396-cff3cb8bdc33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z7mb6_kube-system(cfe8ce7b-a276-49b2-b396-cff3cb8bdc33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z7mb6" podUID="cfe8ce7b-a276-49b2-b396-cff3cb8bdc33" Sep 4 23:50:31.960805 kubelet[2748]: I0904 23:50:31.960780 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d" Sep 4 23:50:31.961374 containerd[1493]: time="2025-09-04T23:50:31.961328336Z" level=info msg="StopPodSandbox for \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\"" Sep 4 23:50:31.961627 containerd[1493]: time="2025-09-04T23:50:31.961601751Z" level=info msg="Ensure that sandbox 9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d in task-service has been cleanup successfully" Sep 4 23:50:31.961862 containerd[1493]: time="2025-09-04T23:50:31.961841592Z" level=info msg="TearDown network for sandbox 
\"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\" successfully" Sep 4 23:50:31.961946 containerd[1493]: time="2025-09-04T23:50:31.961863223Z" level=info msg="StopPodSandbox for \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\" returns successfully" Sep 4 23:50:31.962201 containerd[1493]: time="2025-09-04T23:50:31.962159383Z" level=info msg="StopPodSandbox for \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\"" Sep 4 23:50:31.962201 containerd[1493]: time="2025-09-04T23:50:31.962249105Z" level=info msg="TearDown network for sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\" successfully" Sep 4 23:50:31.962201 containerd[1493]: time="2025-09-04T23:50:31.962259115Z" level=info msg="StopPodSandbox for \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\" returns successfully" Sep 4 23:50:31.962537 kubelet[2748]: I0904 23:50:31.962462 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc" Sep 4 23:50:31.962791 containerd[1493]: time="2025-09-04T23:50:31.962707285Z" level=info msg="StopPodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\"" Sep 4 23:50:31.962847 containerd[1493]: time="2025-09-04T23:50:31.962808359Z" level=info msg="TearDown network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" successfully" Sep 4 23:50:31.962847 containerd[1493]: time="2025-09-04T23:50:31.962823718Z" level=info msg="StopPodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" returns successfully" Sep 4 23:50:31.963198 containerd[1493]: time="2025-09-04T23:50:31.963173570Z" level=info msg="StopPodSandbox for \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\"" Sep 4 23:50:31.963348 containerd[1493]: time="2025-09-04T23:50:31.963324550Z" level=info msg="Ensure that sandbox 
fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc in task-service has been cleanup successfully" Sep 4 23:50:31.963434 containerd[1493]: time="2025-09-04T23:50:31.963398192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-77ks8,Uid:d64b33cd-4dd7-45f1-991e-5a59b10c693d,Namespace:calico-apiserver,Attempt:3,}" Sep 4 23:50:31.963600 containerd[1493]: time="2025-09-04T23:50:31.963568790Z" level=info msg="TearDown network for sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\" successfully" Sep 4 23:50:31.963600 containerd[1493]: time="2025-09-04T23:50:31.963590892Z" level=info msg="StopPodSandbox for \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\" returns successfully" Sep 4 23:50:31.963990 containerd[1493]: time="2025-09-04T23:50:31.963955843Z" level=info msg="StopPodSandbox for \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\"" Sep 4 23:50:31.964091 containerd[1493]: time="2025-09-04T23:50:31.964055094Z" level=info msg="TearDown network for sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\" successfully" Sep 4 23:50:31.964091 containerd[1493]: time="2025-09-04T23:50:31.964074541Z" level=info msg="StopPodSandbox for \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\" returns successfully" Sep 4 23:50:31.964609 containerd[1493]: time="2025-09-04T23:50:31.964564582Z" level=info msg="StopPodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\"" Sep 4 23:50:31.964685 containerd[1493]: time="2025-09-04T23:50:31.964664935Z" level=info msg="TearDown network for sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" successfully" Sep 4 23:50:31.964685 containerd[1493]: time="2025-09-04T23:50:31.964678951Z" level=info msg="StopPodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" returns successfully" Sep 4 23:50:31.965308 containerd[1493]: 
time="2025-09-04T23:50:31.965270469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754df77889-htb2w,Uid:c4b9b516-8bab-431f-a718-d4b546f75053,Namespace:calico-system,Attempt:3,}" Sep 4 23:50:31.965950 kubelet[2748]: I0904 23:50:31.965722 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464" Sep 4 23:50:31.966370 containerd[1493]: time="2025-09-04T23:50:31.966335904Z" level=info msg="StopPodSandbox for \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\"" Sep 4 23:50:31.966583 containerd[1493]: time="2025-09-04T23:50:31.966563963Z" level=info msg="Ensure that sandbox d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464 in task-service has been cleanup successfully" Sep 4 23:50:31.966807 containerd[1493]: time="2025-09-04T23:50:31.966763776Z" level=info msg="TearDown network for sandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\" successfully" Sep 4 23:50:31.966807 containerd[1493]: time="2025-09-04T23:50:31.966782512Z" level=info msg="StopPodSandbox for \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\" returns successfully" Sep 4 23:50:31.967115 containerd[1493]: time="2025-09-04T23:50:31.967090384Z" level=info msg="StopPodSandbox for \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\"" Sep 4 23:50:31.967257 containerd[1493]: time="2025-09-04T23:50:31.967232226Z" level=info msg="TearDown network for sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\" successfully" Sep 4 23:50:31.967257 containerd[1493]: time="2025-09-04T23:50:31.967249158Z" level=info msg="StopPodSandbox for \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\" returns successfully" Sep 4 23:50:31.967501 containerd[1493]: time="2025-09-04T23:50:31.967480623Z" level=info msg="StopPodSandbox for 
\"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\"" Sep 4 23:50:31.967542 kubelet[2748]: I0904 23:50:31.967486 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7" Sep 4 23:50:31.967605 containerd[1493]: time="2025-09-04T23:50:31.967551139Z" level=info msg="TearDown network for sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" successfully" Sep 4 23:50:31.967605 containerd[1493]: time="2025-09-04T23:50:31.967560917Z" level=info msg="StopPodSandbox for \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" returns successfully" Sep 4 23:50:31.968058 containerd[1493]: time="2025-09-04T23:50:31.968021181Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\"" Sep 4 23:50:31.968058 containerd[1493]: time="2025-09-04T23:50:31.968046390Z" level=info msg="StopPodSandbox for \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\"" Sep 4 23:50:31.968267 containerd[1493]: time="2025-09-04T23:50:31.968156812Z" level=info msg="TearDown network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" successfully" Sep 4 23:50:31.968267 containerd[1493]: time="2025-09-04T23:50:31.968168895Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" returns successfully" Sep 4 23:50:31.968326 containerd[1493]: time="2025-09-04T23:50:31.968307922Z" level=info msg="Ensure that sandbox 29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7 in task-service has been cleanup successfully" Sep 4 23:50:31.968531 containerd[1493]: time="2025-09-04T23:50:31.968511413Z" level=info msg="TearDown network for sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\" successfully" Sep 4 23:50:31.968583 containerd[1493]: time="2025-09-04T23:50:31.968529187Z" level=info 
msg="StopPodSandbox for \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\" returns successfully" Sep 4 23:50:31.968608 containerd[1493]: time="2025-09-04T23:50:31.968538335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:4,}" Sep 4 23:50:31.968835 containerd[1493]: time="2025-09-04T23:50:31.968807342Z" level=info msg="StopPodSandbox for \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\"" Sep 4 23:50:31.968930 containerd[1493]: time="2025-09-04T23:50:31.968911822Z" level=info msg="TearDown network for sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\" successfully" Sep 4 23:50:31.968930 containerd[1493]: time="2025-09-04T23:50:31.968927582Z" level=info msg="StopPodSandbox for \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\" returns successfully" Sep 4 23:50:31.969190 containerd[1493]: time="2025-09-04T23:50:31.969168485Z" level=info msg="StopPodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\"" Sep 4 23:50:31.969275 containerd[1493]: time="2025-09-04T23:50:31.969251575Z" level=info msg="TearDown network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" successfully" Sep 4 23:50:31.969275 containerd[1493]: time="2025-09-04T23:50:31.969265863Z" level=info msg="StopPodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" returns successfully" Sep 4 23:50:31.969472 kubelet[2748]: E0904 23:50:31.969451 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:31.969710 containerd[1493]: time="2025-09-04T23:50:31.969687101Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-z7mb6,Uid:cfe8ce7b-a276-49b2-b396-cff3cb8bdc33,Namespace:kube-system,Attempt:3,}" Sep 4 23:50:32.218894 containerd[1493]: time="2025-09-04T23:50:32.218840746Z" level=error msg="Failed to destroy network for sandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.219298 containerd[1493]: time="2025-09-04T23:50:32.219270451Z" level=error msg="encountered an error cleaning up failed sandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.219352 containerd[1493]: time="2025-09-04T23:50:32.219329404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8krf,Uid:75d773f2-41cf-4b0b-ab63-23228f5501e2,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.219622 kubelet[2748]: E0904 23:50:32.219573 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.219672 kubelet[2748]: E0904 
23:50:32.219650 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d8krf" Sep 4 23:50:32.219700 kubelet[2748]: E0904 23:50:32.219682 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d8krf" Sep 4 23:50:32.219766 kubelet[2748]: E0904 23:50:32.219739 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-d8krf_calico-system(75d773f2-41cf-4b0b-ab63-23228f5501e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-d8krf_calico-system(75d773f2-41cf-4b0b-ab63-23228f5501e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-d8krf" podUID="75d773f2-41cf-4b0b-ab63-23228f5501e2" Sep 4 23:50:32.251407 systemd[1]: run-netns-cni\x2deba88177\x2d07a7\x2ddbd6\x2d97c9\x2d74386e6f8460.mount: Deactivated successfully. 
Sep 4 23:50:32.251518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464-shm.mount: Deactivated successfully. Sep 4 23:50:32.251597 systemd[1]: run-netns-cni\x2df0f60679\x2db86a\x2d5e74\x2d6d19\x2d2b06ea99e508.mount: Deactivated successfully. Sep 4 23:50:32.251672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d-shm.mount: Deactivated successfully. Sep 4 23:50:32.251752 systemd[1]: run-netns-cni\x2de6589175\x2d6729\x2d8a6b\x2d065a\x2da0b92b04ad33.mount: Deactivated successfully. Sep 4 23:50:32.251829 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc-shm.mount: Deactivated successfully. Sep 4 23:50:32.367388 containerd[1493]: time="2025-09-04T23:50:32.367326404Z" level=error msg="Failed to destroy network for sandbox \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.367842 containerd[1493]: time="2025-09-04T23:50:32.367809221Z" level=error msg="encountered an error cleaning up failed sandbox \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.367902 containerd[1493]: time="2025-09-04T23:50:32.367881500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7s5n,Uid:72827434-065b-42a4-a2fc-ad0f0cccd362,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.368466 kubelet[2748]: E0904 23:50:32.368192 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.368466 kubelet[2748]: E0904 23:50:32.368279 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q7s5n" Sep 4 23:50:32.368466 kubelet[2748]: E0904 23:50:32.368322 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q7s5n" Sep 4 23:50:32.368605 kubelet[2748]: E0904 23:50:32.368400 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q7s5n_kube-system(72827434-065b-42a4-a2fc-ad0f0cccd362)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-q7s5n_kube-system(72827434-065b-42a4-a2fc-ad0f0cccd362)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q7s5n" podUID="72827434-065b-42a4-a2fc-ad0f0cccd362" Sep 4 23:50:32.370024 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455-shm.mount: Deactivated successfully. Sep 4 23:50:32.384616 containerd[1493]: time="2025-09-04T23:50:32.384556930Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:32.502174 containerd[1493]: time="2025-09-04T23:50:32.501994469Z" level=error msg="Failed to destroy network for sandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.502600 containerd[1493]: time="2025-09-04T23:50:32.502545047Z" level=error msg="encountered an error cleaning up failed sandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.502670 containerd[1493]: time="2025-09-04T23:50:32.502619681Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-9fcb7bdb4-fjbtd,Uid:53444f07-cee6-411d-9fba-8985320ac23a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.503005 kubelet[2748]: E0904 23:50:32.502935 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.503176 kubelet[2748]: E0904 23:50:32.503029 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" Sep 4 23:50:32.503176 kubelet[2748]: E0904 23:50:32.503059 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" Sep 4 23:50:32.503300 kubelet[2748]: E0904 23:50:32.503249 2748 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9fcb7bdb4-fjbtd_calico-system(53444f07-cee6-411d-9fba-8985320ac23a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9fcb7bdb4-fjbtd_calico-system(53444f07-cee6-411d-9fba-8985320ac23a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" podUID="53444f07-cee6-411d-9fba-8985320ac23a" Sep 4 23:50:32.506247 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453-shm.mount: Deactivated successfully. Sep 4 23:50:32.625468 systemd[1]: Started sshd@9-10.0.0.62:22-10.0.0.1:38084.service - OpenSSH per-connection server daemon (10.0.0.1:38084). 
Sep 4 23:50:32.658070 containerd[1493]: time="2025-09-04T23:50:32.657912578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:32.660981 containerd[1493]: time="2025-09-04T23:50:32.660872692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 28.838864639s" Sep 4 23:50:32.660981 containerd[1493]: time="2025-09-04T23:50:32.660941936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 23:50:32.691343 containerd[1493]: time="2025-09-04T23:50:32.691278527Z" level=info msg="CreateContainer within sandbox \"d73c3ad910f60ce2bef3e3930982250d2e4d9b49dce9af6e0674e95007c0c1c5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 23:50:32.694999 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 38084 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:50:32.698956 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:32.711661 systemd-logind[1475]: New session 10 of user core. Sep 4 23:50:32.724114 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 4 23:50:32.765449 containerd[1493]: time="2025-09-04T23:50:32.765283500Z" level=error msg="Failed to destroy network for sandbox \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.765970 containerd[1493]: time="2025-09-04T23:50:32.765774372Z" level=error msg="encountered an error cleaning up failed sandbox \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.765970 containerd[1493]: time="2025-09-04T23:50:32.765856240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-726pc,Uid:24110c1b-5cfd-4ff0-b414-4604e0a02acc,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.767097 kubelet[2748]: E0904 23:50:32.766463 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:32.767097 kubelet[2748]: E0904 23:50:32.766700 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" Sep 4 23:50:32.768015 kubelet[2748]: E0904 23:50:32.767961 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" Sep 4 23:50:32.768254 kubelet[2748]: E0904 23:50:32.768046 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c45fc5f4-726pc_calico-apiserver(24110c1b-5cfd-4ff0-b414-4604e0a02acc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78c45fc5f4-726pc_calico-apiserver(24110c1b-5cfd-4ff0-b414-4604e0a02acc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" podUID="24110c1b-5cfd-4ff0-b414-4604e0a02acc" Sep 4 23:50:32.919984 sshd[4460]: Connection closed by 10.0.0.1 port 38084 Sep 4 23:50:32.920449 sshd-session[4443]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:32.924871 systemd[1]: sshd@9-10.0.0.62:22-10.0.0.1:38084.service: Deactivated successfully. 
Sep 4 23:50:32.927574 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:50:32.928614 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. Sep 4 23:50:32.930429 systemd-logind[1475]: Removed session 10. Sep 4 23:50:32.971091 kubelet[2748]: I0904 23:50:32.971056 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0" Sep 4 23:50:32.971775 containerd[1493]: time="2025-09-04T23:50:32.971687200Z" level=info msg="StopPodSandbox for \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\"" Sep 4 23:50:32.972029 containerd[1493]: time="2025-09-04T23:50:32.971975814Z" level=info msg="Ensure that sandbox adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0 in task-service has been cleanup successfully" Sep 4 23:50:32.972305 containerd[1493]: time="2025-09-04T23:50:32.972272474Z" level=info msg="TearDown network for sandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\" successfully" Sep 4 23:50:32.972538 containerd[1493]: time="2025-09-04T23:50:32.972339462Z" level=info msg="StopPodSandbox for \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\" returns successfully" Sep 4 23:50:32.972757 containerd[1493]: time="2025-09-04T23:50:32.972720565Z" level=info msg="StopPodSandbox for \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\"" Sep 4 23:50:32.972839 containerd[1493]: time="2025-09-04T23:50:32.972822199Z" level=info msg="TearDown network for sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\" successfully" Sep 4 23:50:32.972839 containerd[1493]: time="2025-09-04T23:50:32.972835225Z" level=info msg="StopPodSandbox for \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\" returns successfully" Sep 4 23:50:32.973107 containerd[1493]: time="2025-09-04T23:50:32.973070557Z" level=info msg="StopPodSandbox for 
\"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\"" Sep 4 23:50:32.973195 kubelet[2748]: I0904 23:50:32.973088 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453" Sep 4 23:50:32.973268 containerd[1493]: time="2025-09-04T23:50:32.973249320Z" level=info msg="TearDown network for sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\" successfully" Sep 4 23:50:32.973298 containerd[1493]: time="2025-09-04T23:50:32.973267594Z" level=info msg="StopPodSandbox for \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\" returns successfully" Sep 4 23:50:32.973487 containerd[1493]: time="2025-09-04T23:50:32.973466877Z" level=info msg="StopPodSandbox for \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\"" Sep 4 23:50:32.973716 containerd[1493]: time="2025-09-04T23:50:32.973696048Z" level=info msg="Ensure that sandbox eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453 in task-service has been cleanup successfully" Sep 4 23:50:32.973754 containerd[1493]: time="2025-09-04T23:50:32.973705195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8krf,Uid:75d773f2-41cf-4b0b-ab63-23228f5501e2,Namespace:calico-system,Attempt:3,}" Sep 4 23:50:32.974002 containerd[1493]: time="2025-09-04T23:50:32.973976456Z" level=info msg="TearDown network for sandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\" successfully" Sep 4 23:50:32.974002 containerd[1493]: time="2025-09-04T23:50:32.973992236Z" level=info msg="StopPodSandbox for \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\" returns successfully" Sep 4 23:50:32.974443 containerd[1493]: time="2025-09-04T23:50:32.974396303Z" level=info msg="StopPodSandbox for \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\"" Sep 4 23:50:32.974769 containerd[1493]: 
time="2025-09-04T23:50:32.974683604Z" level=info msg="TearDown network for sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\" successfully" Sep 4 23:50:32.974769 containerd[1493]: time="2025-09-04T23:50:32.974702610Z" level=info msg="StopPodSandbox for \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\" returns successfully" Sep 4 23:50:32.975034 containerd[1493]: time="2025-09-04T23:50:32.974901081Z" level=info msg="StopPodSandbox for \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\"" Sep 4 23:50:32.975034 containerd[1493]: time="2025-09-04T23:50:32.974981716Z" level=info msg="TearDown network for sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\" successfully" Sep 4 23:50:32.975034 containerd[1493]: time="2025-09-04T23:50:32.974991094Z" level=info msg="StopPodSandbox for \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\" returns successfully" Sep 4 23:50:32.975398 containerd[1493]: time="2025-09-04T23:50:32.975373469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fcb7bdb4-fjbtd,Uid:53444f07-cee6-411d-9fba-8985320ac23a,Namespace:calico-system,Attempt:3,}" Sep 4 23:50:32.975764 kubelet[2748]: I0904 23:50:32.975731 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70" Sep 4 23:50:32.976037 containerd[1493]: time="2025-09-04T23:50:32.976018828Z" level=info msg="StopPodSandbox for \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\"" Sep 4 23:50:32.976276 containerd[1493]: time="2025-09-04T23:50:32.976250031Z" level=info msg="Ensure that sandbox 88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70 in task-service has been cleanup successfully" Sep 4 23:50:32.976557 containerd[1493]: time="2025-09-04T23:50:32.976470575Z" level=info msg="TearDown network for sandbox 
\"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\" successfully" Sep 4 23:50:32.976557 containerd[1493]: time="2025-09-04T23:50:32.976500342Z" level=info msg="StopPodSandbox for \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\" returns successfully" Sep 4 23:50:32.976800 containerd[1493]: time="2025-09-04T23:50:32.976778696Z" level=info msg="StopPodSandbox for \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\"" Sep 4 23:50:32.976905 containerd[1493]: time="2025-09-04T23:50:32.976886253Z" level=info msg="TearDown network for sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\" successfully" Sep 4 23:50:32.976905 containerd[1493]: time="2025-09-04T23:50:32.976900791Z" level=info msg="StopPodSandbox for \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\" returns successfully" Sep 4 23:50:32.977171 containerd[1493]: time="2025-09-04T23:50:32.977111515Z" level=info msg="StopPodSandbox for \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\"" Sep 4 23:50:32.977217 containerd[1493]: time="2025-09-04T23:50:32.977204525Z" level=info msg="TearDown network for sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\" successfully" Sep 4 23:50:32.977254 containerd[1493]: time="2025-09-04T23:50:32.977215465Z" level=info msg="StopPodSandbox for \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\" returns successfully" Sep 4 23:50:32.977474 kubelet[2748]: I0904 23:50:32.977453 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455" Sep 4 23:50:32.977751 containerd[1493]: time="2025-09-04T23:50:32.977602228Z" level=info msg="StopPodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\"" Sep 4 23:50:32.977751 containerd[1493]: time="2025-09-04T23:50:32.977685508Z" level=info msg="TearDown network for sandbox 
\"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" successfully" Sep 4 23:50:32.977751 containerd[1493]: time="2025-09-04T23:50:32.977695718Z" level=info msg="StopPodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" returns successfully" Sep 4 23:50:32.977839 containerd[1493]: time="2025-09-04T23:50:32.977776372Z" level=info msg="StopPodSandbox for \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\"" Sep 4 23:50:32.977942 containerd[1493]: time="2025-09-04T23:50:32.977923926Z" level=info msg="Ensure that sandbox 94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455 in task-service has been cleanup successfully" Sep 4 23:50:32.978082 containerd[1493]: time="2025-09-04T23:50:32.978065638Z" level=info msg="TearDown network for sandbox \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\" successfully" Sep 4 23:50:32.978082 containerd[1493]: time="2025-09-04T23:50:32.978077571Z" level=info msg="StopPodSandbox for \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\" returns successfully" Sep 4 23:50:32.978270 containerd[1493]: time="2025-09-04T23:50:32.978231917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-726pc,Uid:24110c1b-5cfd-4ff0-b414-4604e0a02acc,Namespace:calico-apiserver,Attempt:4,}" Sep 4 23:50:32.978377 containerd[1493]: time="2025-09-04T23:50:32.978352259Z" level=info msg="StopPodSandbox for \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\"" Sep 4 23:50:32.978451 containerd[1493]: time="2025-09-04T23:50:32.978437481Z" level=info msg="TearDown network for sandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\" successfully" Sep 4 23:50:32.978501 containerd[1493]: time="2025-09-04T23:50:32.978449715Z" level=info msg="StopPodSandbox for \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\" returns successfully" Sep 4 23:50:32.978802 containerd[1493]: 
time="2025-09-04T23:50:32.978672082Z" level=info msg="StopPodSandbox for \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\"" Sep 4 23:50:32.978802 containerd[1493]: time="2025-09-04T23:50:32.978747948Z" level=info msg="TearDown network for sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\" successfully" Sep 4 23:50:32.978802 containerd[1493]: time="2025-09-04T23:50:32.978757446Z" level=info msg="StopPodSandbox for \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\" returns successfully" Sep 4 23:50:32.978933 kubelet[2748]: E0904 23:50:32.978917 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:32.979177 containerd[1493]: time="2025-09-04T23:50:32.979141283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7s5n,Uid:72827434-065b-42a4-a2fc-ad0f0cccd362,Namespace:kube-system,Attempt:3,}" Sep 4 23:50:33.039460 containerd[1493]: time="2025-09-04T23:50:33.039261063Z" level=error msg="Failed to destroy network for sandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.039697 containerd[1493]: time="2025-09-04T23:50:33.039673605Z" level=error msg="encountered an error cleaning up failed sandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.039766 containerd[1493]: time="2025-09-04T23:50:33.039737017Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-77ks8,Uid:d64b33cd-4dd7-45f1-991e-5a59b10c693d,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.040047 kubelet[2748]: E0904 23:50:33.040004 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.040110 kubelet[2748]: E0904 23:50:33.040078 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" Sep 4 23:50:33.040110 kubelet[2748]: E0904 23:50:33.040102 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" Sep 4 23:50:33.040255 kubelet[2748]: E0904 23:50:33.040196 2748 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78c45fc5f4-77ks8_calico-apiserver(d64b33cd-4dd7-45f1-991e-5a59b10c693d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78c45fc5f4-77ks8_calico-apiserver(d64b33cd-4dd7-45f1-991e-5a59b10c693d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" podUID="d64b33cd-4dd7-45f1-991e-5a59b10c693d" Sep 4 23:50:33.055824 containerd[1493]: time="2025-09-04T23:50:33.055748213Z" level=error msg="Failed to destroy network for sandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.056258 containerd[1493]: time="2025-09-04T23:50:33.056232523Z" level=error msg="encountered an error cleaning up failed sandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.056317 containerd[1493]: time="2025-09-04T23:50:33.056300673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754df77889-htb2w,Uid:c4b9b516-8bab-431f-a718-d4b546f75053,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.056553 kubelet[2748]: E0904 23:50:33.056514 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.056614 kubelet[2748]: E0904 23:50:33.056572 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754df77889-htb2w" Sep 4 23:50:33.056614 kubelet[2748]: E0904 23:50:33.056599 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754df77889-htb2w" Sep 4 23:50:33.056691 kubelet[2748]: E0904 23:50:33.056658 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-754df77889-htb2w_calico-system(c4b9b516-8bab-431f-a718-d4b546f75053)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-754df77889-htb2w_calico-system(c4b9b516-8bab-431f-a718-d4b546f75053)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-754df77889-htb2w" podUID="c4b9b516-8bab-431f-a718-d4b546f75053" Sep 4 23:50:33.117081 containerd[1493]: time="2025-09-04T23:50:33.116997941Z" level=error msg="Failed to destroy network for sandbox \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.117600 containerd[1493]: time="2025-09-04T23:50:33.117558066Z" level=error msg="encountered an error cleaning up failed sandbox \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.117671 containerd[1493]: time="2025-09-04T23:50:33.117643520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.117973 kubelet[2748]: E0904 23:50:33.117932 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.118048 kubelet[2748]: E0904 23:50:33.118003 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-28pvw" Sep 4 23:50:33.118048 kubelet[2748]: E0904 23:50:33.118026 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-28pvw" Sep 4 23:50:33.118105 kubelet[2748]: E0904 23:50:33.118086 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-28pvw_calico-system(a98fe27d-f9b2-46e3-a9a5-1f9fa577590e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-28pvw_calico-system(a98fe27d-f9b2-46e3-a9a5-1f9fa577590e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-28pvw" 
podUID="a98fe27d-f9b2-46e3-a9a5-1f9fa577590e" Sep 4 23:50:33.250948 systemd[1]: run-netns-cni\x2db9a61db6\x2d5b01\x2d98d0\x2dd82d\x2d66b96fd10275.mount: Deactivated successfully. Sep 4 23:50:33.251074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70-shm.mount: Deactivated successfully. Sep 4 23:50:33.251193 systemd[1]: run-netns-cni\x2d740183d9\x2d3939\x2d50b2\x2dc51d\x2d3c1b221c21f9.mount: Deactivated successfully. Sep 4 23:50:33.251301 systemd[1]: run-netns-cni\x2d3f60b102\x2d0626\x2df29c\x2d452a\x2d74e6dba0e01e.mount: Deactivated successfully. Sep 4 23:50:33.251395 systemd[1]: run-netns-cni\x2dcae1fd08\x2d97e4\x2ded1d\x2d4945\x2dcd368f51a144.mount: Deactivated successfully. Sep 4 23:50:33.302951 containerd[1493]: time="2025-09-04T23:50:33.302800645Z" level=error msg="Failed to destroy network for sandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.303517 containerd[1493]: time="2025-09-04T23:50:33.303238877Z" level=error msg="encountered an error cleaning up failed sandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.303517 containerd[1493]: time="2025-09-04T23:50:33.303306226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7mb6,Uid:cfe8ce7b-a276-49b2-b396-cff3cb8bdc33,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.303749 kubelet[2748]: E0904 23:50:33.303589 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:50:33.303749 kubelet[2748]: E0904 23:50:33.303659 2748 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z7mb6" Sep 4 23:50:33.303749 kubelet[2748]: E0904 23:50:33.303682 2748 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z7mb6" Sep 4 23:50:33.303856 kubelet[2748]: E0904 23:50:33.303736 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z7mb6_kube-system(cfe8ce7b-a276-49b2-b396-cff3cb8bdc33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z7mb6_kube-system(cfe8ce7b-a276-49b2-b396-cff3cb8bdc33)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z7mb6" podUID="cfe8ce7b-a276-49b2-b396-cff3cb8bdc33" Sep 4 23:50:33.305547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44-shm.mount: Deactivated successfully. Sep 4 23:50:33.982300 kubelet[2748]: I0904 23:50:33.982260 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339" Sep 4 23:50:33.982841 containerd[1493]: time="2025-09-04T23:50:33.982787456Z" level=info msg="StopPodSandbox for \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\"" Sep 4 23:50:33.983350 containerd[1493]: time="2025-09-04T23:50:33.983085328Z" level=info msg="Ensure that sandbox c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339 in task-service has been cleanup successfully" Sep 4 23:50:33.983557 containerd[1493]: time="2025-09-04T23:50:33.983461560Z" level=info msg="TearDown network for sandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\" successfully" Sep 4 23:50:33.983557 containerd[1493]: time="2025-09-04T23:50:33.983490535Z" level=info msg="StopPodSandbox for \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\" returns successfully" Sep 4 23:50:33.983946 containerd[1493]: time="2025-09-04T23:50:33.983916132Z" level=info msg="StopPodSandbox for \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\"" Sep 4 23:50:33.984027 containerd[1493]: time="2025-09-04T23:50:33.984010284Z" level=info msg="TearDown network for sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\" successfully" Sep 4 23:50:33.984027 
containerd[1493]: time="2025-09-04T23:50:33.984024170Z" level=info msg="StopPodSandbox for \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\" returns successfully" Sep 4 23:50:33.984337 containerd[1493]: time="2025-09-04T23:50:33.984313616Z" level=info msg="StopPodSandbox for \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\"" Sep 4 23:50:33.984412 containerd[1493]: time="2025-09-04T23:50:33.984394772Z" level=info msg="TearDown network for sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\" successfully" Sep 4 23:50:33.984412 containerd[1493]: time="2025-09-04T23:50:33.984408548Z" level=info msg="StopPodSandbox for \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\" returns successfully" Sep 4 23:50:33.984869 containerd[1493]: time="2025-09-04T23:50:33.984742619Z" level=info msg="StopPodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\"" Sep 4 23:50:33.984869 containerd[1493]: time="2025-09-04T23:50:33.984821000Z" level=info msg="TearDown network for sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" successfully" Sep 4 23:50:33.984869 containerd[1493]: time="2025-09-04T23:50:33.984829967Z" level=info msg="StopPodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" returns successfully" Sep 4 23:50:33.985770 containerd[1493]: time="2025-09-04T23:50:33.985417856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754df77889-htb2w,Uid:c4b9b516-8bab-431f-a718-d4b546f75053,Namespace:calico-system,Attempt:4,}" Sep 4 23:50:33.986325 kubelet[2748]: I0904 23:50:33.985891 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26" Sep 4 23:50:33.986477 containerd[1493]: time="2025-09-04T23:50:33.986431521Z" level=info msg="StopPodSandbox for 
\"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\"" Sep 4 23:50:33.986462 systemd[1]: run-netns-cni\x2d5e62ae15\x2dfa3f\x2daf75\x2d954b\x2d5e614b960abc.mount: Deactivated successfully. Sep 4 23:50:33.986722 containerd[1493]: time="2025-09-04T23:50:33.986682132Z" level=info msg="Ensure that sandbox 3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26 in task-service has been cleanup successfully" Sep 4 23:50:33.987950 kubelet[2748]: I0904 23:50:33.987919 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44" Sep 4 23:50:33.988922 containerd[1493]: time="2025-09-04T23:50:33.988894078Z" level=info msg="StopPodSandbox for \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\"" Sep 4 23:50:33.989105 containerd[1493]: time="2025-09-04T23:50:33.989074956Z" level=info msg="Ensure that sandbox 5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44 in task-service has been cleanup successfully" Sep 4 23:50:33.990557 containerd[1493]: time="2025-09-04T23:50:33.989262867Z" level=info msg="TearDown network for sandbox \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\" successfully" Sep 4 23:50:33.990557 containerd[1493]: time="2025-09-04T23:50:33.989280099Z" level=info msg="StopPodSandbox for \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\" returns successfully" Sep 4 23:50:33.990557 containerd[1493]: time="2025-09-04T23:50:33.989545158Z" level=info msg="StopPodSandbox for \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\"" Sep 4 23:50:33.990557 containerd[1493]: time="2025-09-04T23:50:33.989623219Z" level=info msg="TearDown network for sandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\" successfully" Sep 4 23:50:33.990557 containerd[1493]: time="2025-09-04T23:50:33.989632025Z" level=info msg="StopPodSandbox for 
\"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\" returns successfully" Sep 4 23:50:33.990557 containerd[1493]: time="2025-09-04T23:50:33.989982248Z" level=info msg="StopPodSandbox for \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\"" Sep 4 23:50:33.990557 containerd[1493]: time="2025-09-04T23:50:33.990076118Z" level=info msg="TearDown network for sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\" successfully" Sep 4 23:50:33.990557 containerd[1493]: time="2025-09-04T23:50:33.990086527Z" level=info msg="StopPodSandbox for \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\" returns successfully" Sep 4 23:50:33.989462 systemd[1]: run-netns-cni\x2dba9bee04\x2d4347\x2d749a\x2d869d\x2d3f7a37b6fafd.mount: Deactivated successfully. Sep 4 23:50:33.991378 containerd[1493]: time="2025-09-04T23:50:33.991208091Z" level=info msg="TearDown network for sandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\" successfully" Sep 4 23:50:33.991378 containerd[1493]: time="2025-09-04T23:50:33.991228499Z" level=info msg="StopPodSandbox for \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\" returns successfully" Sep 4 23:50:33.991378 containerd[1493]: time="2025-09-04T23:50:33.991246113Z" level=info msg="StopPodSandbox for \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\"" Sep 4 23:50:33.991378 containerd[1493]: time="2025-09-04T23:50:33.991355263Z" level=info msg="TearDown network for sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" successfully" Sep 4 23:50:33.991378 containerd[1493]: time="2025-09-04T23:50:33.991368058Z" level=info msg="StopPodSandbox for \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" returns successfully" Sep 4 23:50:33.991515 containerd[1493]: time="2025-09-04T23:50:33.991481896Z" level=info msg="StopPodSandbox for 
\"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\"" Sep 4 23:50:33.991611 containerd[1493]: time="2025-09-04T23:50:33.991591557Z" level=info msg="TearDown network for sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\" successfully" Sep 4 23:50:33.991611 containerd[1493]: time="2025-09-04T23:50:33.991608510Z" level=info msg="StopPodSandbox for \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\" returns successfully" Sep 4 23:50:33.992236 containerd[1493]: time="2025-09-04T23:50:33.991992587Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\"" Sep 4 23:50:33.992236 containerd[1493]: time="2025-09-04T23:50:33.992019288Z" level=info msg="StopPodSandbox for \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\"" Sep 4 23:50:33.992236 containerd[1493]: time="2025-09-04T23:50:33.992098690Z" level=info msg="TearDown network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" successfully" Sep 4 23:50:33.992236 containerd[1493]: time="2025-09-04T23:50:33.992111104Z" level=info msg="TearDown network for sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\" successfully" Sep 4 23:50:33.992236 containerd[1493]: time="2025-09-04T23:50:33.992176530Z" level=info msg="StopPodSandbox for \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\" returns successfully" Sep 4 23:50:33.992236 containerd[1493]: time="2025-09-04T23:50:33.992113238Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" returns successfully" Sep 4 23:50:33.992786 containerd[1493]: time="2025-09-04T23:50:33.992765029Z" level=info msg="StopPodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\"" Sep 4 23:50:33.993083 containerd[1493]: time="2025-09-04T23:50:33.992835094Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:5,}" Sep 4 23:50:33.993083 containerd[1493]: time="2025-09-04T23:50:33.992926399Z" level=info msg="TearDown network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" successfully" Sep 4 23:50:33.993083 containerd[1493]: time="2025-09-04T23:50:33.992940156Z" level=info msg="StopPodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" returns successfully" Sep 4 23:50:33.992953 systemd[1]: run-netns-cni\x2d96348546\x2ddad1\x2dbc7c\x2d4c85\x2d739502f86066.mount: Deactivated successfully. Sep 4 23:50:33.993298 kubelet[2748]: E0904 23:50:33.993158 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:33.993638 containerd[1493]: time="2025-09-04T23:50:33.993605924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7mb6,Uid:cfe8ce7b-a276-49b2-b396-cff3cb8bdc33,Namespace:kube-system,Attempt:4,}" Sep 4 23:50:33.993964 kubelet[2748]: I0904 23:50:33.993937 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc" Sep 4 23:50:33.994558 containerd[1493]: time="2025-09-04T23:50:33.994530358Z" level=info msg="StopPodSandbox for \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\"" Sep 4 23:50:33.994760 containerd[1493]: time="2025-09-04T23:50:33.994740712Z" level=info msg="Ensure that sandbox 221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc in task-service has been cleanup successfully" Sep 4 23:50:33.994990 containerd[1493]: time="2025-09-04T23:50:33.994969672Z" level=info msg="TearDown network for sandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\" successfully" Sep 4 23:50:33.994990 
containerd[1493]: time="2025-09-04T23:50:33.994985983Z" level=info msg="StopPodSandbox for \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\" returns successfully" Sep 4 23:50:33.995388 containerd[1493]: time="2025-09-04T23:50:33.995364088Z" level=info msg="StopPodSandbox for \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\"" Sep 4 23:50:33.995470 containerd[1493]: time="2025-09-04T23:50:33.995448390Z" level=info msg="TearDown network for sandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\" successfully" Sep 4 23:50:33.995470 containerd[1493]: time="2025-09-04T23:50:33.995465804Z" level=info msg="StopPodSandbox for \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\" returns successfully" Sep 4 23:50:33.995833 containerd[1493]: time="2025-09-04T23:50:33.995798743Z" level=info msg="StopPodSandbox for \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\"" Sep 4 23:50:33.995922 containerd[1493]: time="2025-09-04T23:50:33.995895529Z" level=info msg="TearDown network for sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\" successfully" Sep 4 23:50:33.995922 containerd[1493]: time="2025-09-04T23:50:33.995906741Z" level=info msg="StopPodSandbox for \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\" returns successfully" Sep 4 23:50:33.996187 containerd[1493]: time="2025-09-04T23:50:33.996152813Z" level=info msg="StopPodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\"" Sep 4 23:50:33.996250 containerd[1493]: time="2025-09-04T23:50:33.996233457Z" level=info msg="TearDown network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" successfully" Sep 4 23:50:33.996250 containerd[1493]: time="2025-09-04T23:50:33.996244619Z" level=info msg="StopPodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" returns successfully" Sep 4 23:50:33.996667 
containerd[1493]: time="2025-09-04T23:50:33.996644096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-77ks8,Uid:d64b33cd-4dd7-45f1-991e-5a59b10c693d,Namespace:calico-apiserver,Attempt:4,}" Sep 4 23:50:34.249051 systemd[1]: run-netns-cni\x2d5d66a4b9\x2dd199\x2d3f29\x2d96db\x2d4c198e6c0072.mount: Deactivated successfully. Sep 4 23:50:36.151772 kubelet[2748]: E0904 23:50:36.151714 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:37.940296 systemd[1]: Started sshd@10-10.0.0.62:22-10.0.0.1:38088.service - OpenSSH per-connection server daemon (10.0.0.1:38088). Sep 4 23:50:38.112397 sshd[4627]: Accepted publickey for core from 10.0.0.1 port 38088 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:50:38.114296 sshd-session[4627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:38.118875 systemd-logind[1475]: New session 11 of user core. Sep 4 23:50:38.125254 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 23:50:41.192750 sshd[4629]: Connection closed by 10.0.0.1 port 38088 Sep 4 23:50:41.193190 sshd-session[4627]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:41.197337 systemd[1]: sshd@10-10.0.0.62:22-10.0.0.1:38088.service: Deactivated successfully. Sep 4 23:50:41.199848 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:50:41.200732 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit. Sep 4 23:50:41.201628 systemd-logind[1475]: Removed session 11. 
Sep 4 23:50:41.661601 containerd[1493]: time="2025-09-04T23:50:41.661436525Z" level=info msg="CreateContainer within sandbox \"d73c3ad910f60ce2bef3e3930982250d2e4d9b49dce9af6e0674e95007c0c1c5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"17490ec24891abc4e4a056df85f3776013f58f0edc79b0c0bc9bd24e1241a525\"" Sep 4 23:50:41.662263 containerd[1493]: time="2025-09-04T23:50:41.662218954Z" level=info msg="StartContainer for \"17490ec24891abc4e4a056df85f3776013f58f0edc79b0c0bc9bd24e1241a525\"" Sep 4 23:50:41.724419 systemd[1]: Started cri-containerd-17490ec24891abc4e4a056df85f3776013f58f0edc79b0c0bc9bd24e1241a525.scope - libcontainer container 17490ec24891abc4e4a056df85f3776013f58f0edc79b0c0bc9bd24e1241a525. Sep 4 23:50:42.387583 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 23:50:42.387799 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 23:50:45.034712 containerd[1493]: time="2025-09-04T23:50:45.034634839Z" level=info msg="StartContainer for \"17490ec24891abc4e4a056df85f3776013f58f0edc79b0c0bc9bd24e1241a525\" returns successfully" Sep 4 23:50:45.035769 kubelet[2748]: E0904 23:50:45.035247 2748 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.884s" Sep 4 23:50:45.036171 kubelet[2748]: E0904 23:50:45.035852 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:46.040822 kubelet[2748]: E0904 23:50:46.040781 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:46.202051 kubelet[2748]: I0904 23:50:46.201958 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wnjtd" 
podStartSLOduration=14.421906208 podStartE2EDuration="1m5.201942186s" podCreationTimestamp="2025-09-04 23:49:41 +0000 UTC" firstStartedPulling="2025-09-04 23:49:41.883807219 +0000 UTC m=+36.828891123" lastFinishedPulling="2025-09-04 23:50:32.663843187 +0000 UTC m=+87.608927101" observedRunningTime="2025-09-04 23:50:46.20134881 +0000 UTC m=+101.146432724" watchObservedRunningTime="2025-09-04 23:50:46.201942186 +0000 UTC m=+101.147026090"
Sep 4 23:50:46.218589 systemd[1]: Started sshd@11-10.0.0.62:22-10.0.0.1:46288.service - OpenSSH per-connection server daemon (10.0.0.1:46288).
Sep 4 23:50:46.312336 sshd[4688]: Accepted publickey for core from 10.0.0.1 port 46288 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs
Sep 4 23:50:46.314433 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:50:46.321483 systemd-logind[1475]: New session 12 of user core.
Sep 4 23:50:46.327511 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 23:50:46.763483 sshd[4690]: Connection closed by 10.0.0.1 port 46288
Sep 4 23:50:46.763958 sshd-session[4688]: pam_unix(sshd:session): session closed for user core
Sep 4 23:50:46.769926 systemd[1]: sshd@11-10.0.0.62:22-10.0.0.1:46288.service: Deactivated successfully.
Sep 4 23:50:46.772919 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 23:50:46.773954 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit.
Sep 4 23:50:46.775188 systemd-logind[1475]: Removed session 12.
Sep 4 23:50:48.443474 systemd-networkd[1427]: calic36fce0f458: Link UP Sep 4 23:50:48.446265 systemd-networkd[1427]: calic36fce0f458: Gained carrier Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:47.769 [INFO][4782] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:47.791 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0 coredns-674b8bbfcf- kube-system cfe8ce7b-a276-49b2-b396-cff3cb8bdc33 965 0 2025-09-04 23:49:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-z7mb6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic36fce0f458 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7mb6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z7mb6-" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:47.791 [INFO][4782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7mb6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.267 [INFO][4850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" HandleID="k8s-pod-network.9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Workload="localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.268 [INFO][4850] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" HandleID="k8s-pod-network.9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Workload="localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000be3f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-z7mb6", "timestamp":"2025-09-04 23:50:48.267856086 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.268 [INFO][4850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.271 [INFO][4850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.272 [INFO][4850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.283 [INFO][4850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" host="localhost" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.290 [INFO][4850] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.297 [INFO][4850] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.299 [INFO][4850] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.302 [INFO][4850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.302 [INFO][4850] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" host="localhost" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.304 [INFO][4850] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211 Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.347 [INFO][4850] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" host="localhost" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.402 [INFO][4850] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" host="localhost" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.402 [INFO][4850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" host="localhost" Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.402 [INFO][4850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 23:50:48.543328 containerd[1493]: 2025-09-04 23:50:48.402 [INFO][4850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" HandleID="k8s-pod-network.9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Workload="localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0" Sep 4 23:50:48.546115 containerd[1493]: 2025-09-04 23:50:48.410 [INFO][4782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7mb6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cfe8ce7b-a276-49b2-b396-cff3cb8bdc33", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-z7mb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic36fce0f458", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:48.546115 containerd[1493]: 2025-09-04 23:50:48.410 [INFO][4782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7mb6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0" Sep 4 23:50:48.546115 containerd[1493]: 2025-09-04 23:50:48.410 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic36fce0f458 ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7mb6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0" Sep 4 23:50:48.546115 containerd[1493]: 2025-09-04 23:50:48.446 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7mb6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0" Sep 4 23:50:48.546115 containerd[1493]: 2025-09-04 23:50:48.447 [INFO][4782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7mb6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cfe8ce7b-a276-49b2-b396-cff3cb8bdc33", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211", Pod:"coredns-674b8bbfcf-z7mb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic36fce0f458", MAC:"a2:d2:4f:c9:c7:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:48.546115 containerd[1493]: 2025-09-04 23:50:48.537 [INFO][4782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211" Namespace="kube-system" Pod="coredns-674b8bbfcf-z7mb6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z7mb6-eth0"
Sep 4 23:50:48.791209 containerd[1493]: time="2025-09-04T23:50:48.790553957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:50:48.791209 containerd[1493]: time="2025-09-04T23:50:48.790640153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:50:48.791209 containerd[1493]: time="2025-09-04T23:50:48.790661433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:50:48.791209 containerd[1493]: time="2025-09-04T23:50:48.790786071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:50:48.799381 systemd-networkd[1427]: cali7ad0bf8edad: Link UP
Sep 4 23:50:48.800840 systemd-networkd[1427]: cali7ad0bf8edad: Gained carrier
Sep 4 23:50:48.847382 systemd[1]: Started cri-containerd-9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211.scope - libcontainer container 9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211.
Sep 4 23:50:48.861614 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:47.778 [INFO][4797] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:47.831 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--754df77889--htb2w-eth0 whisker-754df77889- calico-system c4b9b516-8bab-431f-a718-d4b546f75053 1193 0 2025-09-04 23:49:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:754df77889 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-754df77889-htb2w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7ad0bf8edad [] [] }} ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Namespace="calico-system" Pod="whisker-754df77889-htb2w" WorkloadEndpoint="localhost-k8s-whisker--754df77889--htb2w-" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:47.832 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Namespace="calico-system" Pod="whisker-754df77889-htb2w" WorkloadEndpoint="localhost-k8s-whisker--754df77889--htb2w-eth0" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.268 [INFO][4854] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.268 [INFO][4854] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004dd150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-754df77889-htb2w", "timestamp":"2025-09-04 23:50:48.268358397 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.268 [INFO][4854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.403 [INFO][4854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.405 [INFO][4854] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.417 [INFO][4854] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" host="localhost" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.482 [INFO][4854] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.546 [INFO][4854] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.552 [INFO][4854] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.571 [INFO][4854] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.571 [INFO][4854] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" host="localhost" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.576 [INFO][4854] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01 Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.633 [INFO][4854] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" host="localhost" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.785 [INFO][4854] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" host="localhost" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.786 [INFO][4854] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" host="localhost" Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.787 [INFO][4854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 23:50:48.874161 containerd[1493]: 2025-09-04 23:50:48.787 [INFO][4854] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0" Sep 4 23:50:48.874913 containerd[1493]: 2025-09-04 23:50:48.794 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Namespace="calico-system" Pod="whisker-754df77889-htb2w" WorkloadEndpoint="localhost-k8s-whisker--754df77889--htb2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--754df77889--htb2w-eth0", GenerateName:"whisker-754df77889-", Namespace:"calico-system", SelfLink:"", UID:"c4b9b516-8bab-431f-a718-d4b546f75053", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"754df77889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-754df77889-htb2w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7ad0bf8edad", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:48.874913 containerd[1493]: 2025-09-04 23:50:48.795 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Namespace="calico-system" Pod="whisker-754df77889-htb2w" WorkloadEndpoint="localhost-k8s-whisker--754df77889--htb2w-eth0" Sep 4 23:50:48.874913 containerd[1493]: 2025-09-04 23:50:48.795 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ad0bf8edad ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Namespace="calico-system" Pod="whisker-754df77889-htb2w" WorkloadEndpoint="localhost-k8s-whisker--754df77889--htb2w-eth0" Sep 4 23:50:48.874913 containerd[1493]: 2025-09-04 23:50:48.801 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Namespace="calico-system" Pod="whisker-754df77889-htb2w" WorkloadEndpoint="localhost-k8s-whisker--754df77889--htb2w-eth0" Sep 4 23:50:48.874913 containerd[1493]: 2025-09-04 23:50:48.801 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Namespace="calico-system" Pod="whisker-754df77889-htb2w" WorkloadEndpoint="localhost-k8s-whisker--754df77889--htb2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--754df77889--htb2w-eth0", GenerateName:"whisker-754df77889-", Namespace:"calico-system", SelfLink:"", UID:"c4b9b516-8bab-431f-a718-d4b546f75053", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 44, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"754df77889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01", Pod:"whisker-754df77889-htb2w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7ad0bf8edad", MAC:"52:59:7d:9e:d3:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:48.874913 containerd[1493]: 2025-09-04 23:50:48.866 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Namespace="calico-system" Pod="whisker-754df77889-htb2w" WorkloadEndpoint="localhost-k8s-whisker--754df77889--htb2w-eth0" Sep 4 23:50:48.906369 containerd[1493]: time="2025-09-04T23:50:48.906319674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z7mb6,Uid:cfe8ce7b-a276-49b2-b396-cff3cb8bdc33,Namespace:kube-system,Attempt:4,} returns sandbox id \"9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211\"" Sep 4 23:50:48.907488 kubelet[2748]: E0904 23:50:48.907444 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:49.048886 containerd[1493]: time="2025-09-04T23:50:49.047665445Z" 
level=info msg="CreateContainer within sandbox \"9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:50:49.053203 containerd[1493]: time="2025-09-04T23:50:49.052874075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:50:49.053203 containerd[1493]: time="2025-09-04T23:50:49.052972313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:50:49.053203 containerd[1493]: time="2025-09-04T23:50:49.053029532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:50:49.053353 containerd[1493]: time="2025-09-04T23:50:49.053197082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:50:49.096334 systemd[1]: Started cri-containerd-2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01.scope - libcontainer container 2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01.
Sep 4 23:50:49.110474 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 23:50:49.137346 containerd[1493]: time="2025-09-04T23:50:49.137289395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754df77889-htb2w,Uid:c4b9b516-8bab-431f-a718-d4b546f75053,Namespace:calico-system,Attempt:4,} returns sandbox id \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\""
Sep 4 23:50:49.141626 containerd[1493]: time="2025-09-04T23:50:49.139335948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 4 23:50:49.142149 kernel: bpftool[5144]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 4 23:50:49.417173 systemd-networkd[1427]: cali74ee87ab03b: Link UP
Sep 4 23:50:49.422347 systemd-networkd[1427]: cali74ee87ab03b: Gained carrier
Sep 4 23:50:49.449364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3513385269.mount: Deactivated successfully.
Sep 4 23:50:49.453040 systemd-networkd[1427]: vxlan.calico: Link UP
Sep 4 23:50:49.453053 systemd-networkd[1427]: vxlan.calico: Gained carrier
Sep 4 23:50:49.578362 systemd-networkd[1427]: calic36fce0f458: Gained IPv6LL
Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:47.692 [INFO][4754] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:47.714 [INFO][4754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--d8krf-eth0 goldmane-54d579b49d- calico-system 75d773f2-41cf-4b0b-ab63-23228f5501e2 979 0 2025-09-04 23:49:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-d8krf eth0 goldmane [] [] [kns.calico-system
ksa.calico-system.goldmane] cali74ee87ab03b [] [] }} ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" Namespace="calico-system" Pod="goldmane-54d579b49d-d8krf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d8krf-" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:47.715 [INFO][4754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" Namespace="calico-system" Pod="goldmane-54d579b49d-d8krf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d8krf-eth0" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:48.267 [INFO][4830] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" HandleID="k8s-pod-network.9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" Workload="localhost-k8s-goldmane--54d579b49d--d8krf-eth0" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:48.268 [INFO][4830] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" HandleID="k8s-pod-network.9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" Workload="localhost-k8s-goldmane--54d579b49d--d8krf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bed40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-d8krf", "timestamp":"2025-09-04 23:50:48.267056717 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:48.268 [INFO][4830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:48.786 [INFO][4830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:48.786 [INFO][4830] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:48.800 [INFO][4830] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" host="localhost" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:48.972 [INFO][4830] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:49.029 [INFO][4830] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:49.033 [INFO][4830] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:49.038 [INFO][4830] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:49.038 [INFO][4830] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" host="localhost" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:49.040 [INFO][4830] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:49.205 [INFO][4830] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" host="localhost" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:49.403 [INFO][4830] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" host="localhost" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:49.403 [INFO][4830] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" host="localhost" Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:49.403 [INFO][4830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 23:50:49.588329 containerd[1493]: 2025-09-04 23:50:49.403 [INFO][4830] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" HandleID="k8s-pod-network.9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" Workload="localhost-k8s-goldmane--54d579b49d--d8krf-eth0" Sep 4 23:50:49.625683 containerd[1493]: 2025-09-04 23:50:49.411 [INFO][4754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" Namespace="calico-system" Pod="goldmane-54d579b49d-d8krf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d8krf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--d8krf-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"75d773f2-41cf-4b0b-ab63-23228f5501e2", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-d8krf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali74ee87ab03b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:49.625683 containerd[1493]: 2025-09-04 23:50:49.412 [INFO][4754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" Namespace="calico-system" Pod="goldmane-54d579b49d-d8krf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d8krf-eth0" Sep 4 23:50:49.625683 containerd[1493]: 2025-09-04 23:50:49.412 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74ee87ab03b ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" Namespace="calico-system" Pod="goldmane-54d579b49d-d8krf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d8krf-eth0" Sep 4 23:50:49.625683 containerd[1493]: 2025-09-04 23:50:49.423 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" Namespace="calico-system" Pod="goldmane-54d579b49d-d8krf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d8krf-eth0" Sep 4 23:50:49.625683 containerd[1493]: 2025-09-04 23:50:49.425 [INFO][4754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" 
Namespace="calico-system" Pod="goldmane-54d579b49d-d8krf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d8krf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--d8krf-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"75d773f2-41cf-4b0b-ab63-23228f5501e2", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d", Pod:"goldmane-54d579b49d-d8krf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali74ee87ab03b", MAC:"6a:18:90:f6:f5:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:49.625683 containerd[1493]: 2025-09-04 23:50:49.583 [INFO][4754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d" Namespace="calico-system" Pod="goldmane-54d579b49d-d8krf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d8krf-eth0" Sep 4 23:50:49.668147 containerd[1493]: 
time="2025-09-04T23:50:49.668071039Z" level=info msg="CreateContainer within sandbox \"9e09d744634d2d84979c321345de13f3f688cc194d9e3252c45fdd04486cc211\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf70647c96bb509ef01be9586a2325498c537a31cc4e0a0f5ece081314d2d3e6\"" Sep 4 23:50:49.670321 containerd[1493]: time="2025-09-04T23:50:49.669135104Z" level=info msg="StartContainer for \"cf70647c96bb509ef01be9586a2325498c537a31cc4e0a0f5ece081314d2d3e6\"" Sep 4 23:50:49.709300 systemd[1]: Started cri-containerd-cf70647c96bb509ef01be9586a2325498c537a31cc4e0a0f5ece081314d2d3e6.scope - libcontainer container cf70647c96bb509ef01be9586a2325498c537a31cc4e0a0f5ece081314d2d3e6. Sep 4 23:50:49.838007 systemd-networkd[1427]: cali8cf55bef367: Link UP Sep 4 23:50:49.839844 systemd-networkd[1427]: cali8cf55bef367: Gained carrier Sep 4 23:50:49.907368 containerd[1493]: time="2025-09-04T23:50:49.907319011Z" level=info msg="StartContainer for \"cf70647c96bb509ef01be9586a2325498c537a31cc4e0a0f5ece081314d2d3e6\" returns successfully" Sep 4 23:50:49.951802 containerd[1493]: time="2025-09-04T23:50:49.951622638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:49.951802 containerd[1493]: time="2025-09-04T23:50:49.951682303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:49.951802 containerd[1493]: time="2025-09-04T23:50:49.951695758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:49.952078 containerd[1493]: time="2025-09-04T23:50:49.951809836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:49.974345 systemd[1]: Started cri-containerd-9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d.scope - libcontainer container 9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d. Sep 4 23:50:49.989612 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:50:50.017651 containerd[1493]: time="2025-09-04T23:50:50.017588282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d8krf,Uid:75d773f2-41cf-4b0b-ab63-23228f5501e2,Namespace:calico-system,Attempt:3,} returns sandbox id \"9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d\"" Sep 4 23:50:50.058689 kubelet[2748]: E0904 23:50:50.058646 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:50.062538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290988818.mount: Deactivated successfully. 
Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:47.626 [INFO][4738] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:47.690 [INFO][4738] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0 calico-apiserver-78c45fc5f4- calico-apiserver 24110c1b-5cfd-4ff0-b414-4604e0a02acc 978 0 2025-09-04 23:49:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78c45fc5f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78c45fc5f4-726pc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8cf55bef367 [] [] }} ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-726pc" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:47.690 [INFO][4738] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-726pc" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:48.267 [INFO][4835] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" HandleID="k8s-pod-network.c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Workload="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:48.269 [INFO][4835] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" HandleID="k8s-pod-network.c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Workload="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000706230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78c45fc5f4-726pc", "timestamp":"2025-09-04 23:50:48.267548437 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:48.269 [INFO][4835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.403 [INFO][4835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.404 [INFO][4835] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.420 [INFO][4835] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" host="localhost" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.431 [INFO][4835] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.583 [INFO][4835] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.586 [INFO][4835] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.591 [INFO][4835] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.591 [INFO][4835] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" host="localhost" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.593 [INFO][4835] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.643 [INFO][4835] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" host="localhost" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.825 [INFO][4835] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" host="localhost" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.825 [INFO][4835] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" host="localhost" Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.825 [INFO][4835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 23:50:50.430043 containerd[1493]: 2025-09-04 23:50:49.825 [INFO][4835] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" HandleID="k8s-pod-network.c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Workload="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0" Sep 4 23:50:50.430897 containerd[1493]: 2025-09-04 23:50:49.830 [INFO][4738] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-726pc" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0", GenerateName:"calico-apiserver-78c45fc5f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"24110c1b-5cfd-4ff0-b414-4604e0a02acc", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c45fc5f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78c45fc5f4-726pc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cf55bef367", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:50.430897 containerd[1493]: 2025-09-04 23:50:49.830 [INFO][4738] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-726pc" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0" Sep 4 23:50:50.430897 containerd[1493]: 2025-09-04 23:50:49.830 [INFO][4738] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cf55bef367 ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-726pc" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0" Sep 4 23:50:50.430897 containerd[1493]: 2025-09-04 23:50:49.840 [INFO][4738] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-726pc" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0" Sep 4 23:50:50.430897 containerd[1493]: 2025-09-04 23:50:49.840 [INFO][4738] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-726pc" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0", 
GenerateName:"calico-apiserver-78c45fc5f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"24110c1b-5cfd-4ff0-b414-4604e0a02acc", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c45fc5f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef", Pod:"calico-apiserver-78c45fc5f4-726pc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cf55bef367", MAC:"b6:69:8a:a5:4f:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:50.430897 containerd[1493]: 2025-09-04 23:50:50.423 [INFO][4738] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-726pc" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--726pc-eth0" Sep 4 23:50:50.538321 systemd-networkd[1427]: cali7ad0bf8edad: Gained IPv6LL Sep 4 23:50:50.538782 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL Sep 4 23:50:50.693532 containerd[1493]: time="2025-09-04T23:50:50.692376915Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:50.693532 containerd[1493]: time="2025-09-04T23:50:50.692474872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:50.693532 containerd[1493]: time="2025-09-04T23:50:50.692494430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:50.693532 containerd[1493]: time="2025-09-04T23:50:50.693377899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:50.730354 systemd-networkd[1427]: cali74ee87ab03b: Gained IPv6LL Sep 4 23:50:50.737469 systemd[1]: Started cri-containerd-c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef.scope - libcontainer container c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef. 
Sep 4 23:50:50.755274 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:50:50.783786 containerd[1493]: time="2025-09-04T23:50:50.783731016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-726pc,Uid:24110c1b-5cfd-4ff0-b414-4604e0a02acc,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef\"" Sep 4 23:50:50.986413 systemd-networkd[1427]: cali8cf55bef367: Gained IPv6LL Sep 4 23:50:51.076455 kubelet[2748]: E0904 23:50:51.076390 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:51.335672 systemd-networkd[1427]: cali368a7a3bed4: Link UP Sep 4 23:50:51.335876 systemd-networkd[1427]: cali368a7a3bed4: Gained carrier Sep 4 23:50:51.643381 kubelet[2748]: I0904 23:50:51.642618 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z7mb6" podStartSLOduration=99.642597183 podStartE2EDuration="1m39.642597183s" podCreationTimestamp="2025-09-04 23:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:50:50.696186608 +0000 UTC m=+105.641270522" watchObservedRunningTime="2025-09-04 23:50:51.642597183 +0000 UTC m=+106.587681087" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:47.785 [INFO][4802] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:47.842 [INFO][4802] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--28pvw-eth0 csi-node-driver- calico-system a98fe27d-f9b2-46e3-a9a5-1f9fa577590e 828 0 2025-09-04 23:49:41 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-28pvw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali368a7a3bed4 [] [] }} ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" Namespace="calico-system" Pod="csi-node-driver-28pvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--28pvw-" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:47.842 [INFO][4802] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" Namespace="calico-system" Pod="csi-node-driver-28pvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--28pvw-eth0" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:48.266 [INFO][4886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" HandleID="k8s-pod-network.c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" Workload="localhost-k8s-csi--node--driver--28pvw-eth0" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:48.269 [INFO][4886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" HandleID="k8s-pod-network.c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" Workload="localhost-k8s-csi--node--driver--28pvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-28pvw", "timestamp":"2025-09-04 23:50:48.266975341 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:48.269 [INFO][4886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:49.825 [INFO][4886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:49.826 [INFO][4886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:49.840 [INFO][4886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" host="localhost" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:50.425 [INFO][4886] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:50.694 [INFO][4886] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:50.705 [INFO][4886] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:50.924 [INFO][4886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:50.924 [INFO][4886] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" host="localhost" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:50.933 [INFO][4886] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61 Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:51.169 
[INFO][4886] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" host="localhost" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:51.329 [INFO][4886] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" host="localhost" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:51.329 [INFO][4886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" host="localhost" Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:51.329 [INFO][4886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 23:50:51.649253 containerd[1493]: 2025-09-04 23:50:51.329 [INFO][4886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" HandleID="k8s-pod-network.c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" Workload="localhost-k8s-csi--node--driver--28pvw-eth0" Sep 4 23:50:51.650396 containerd[1493]: 2025-09-04 23:50:51.332 [INFO][4802] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" Namespace="calico-system" Pod="csi-node-driver-28pvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--28pvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--28pvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a98fe27d-f9b2-46e3-a9a5-1f9fa577590e", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 41, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-28pvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali368a7a3bed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:51.650396 containerd[1493]: 2025-09-04 23:50:51.333 [INFO][4802] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" Namespace="calico-system" Pod="csi-node-driver-28pvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--28pvw-eth0" Sep 4 23:50:51.650396 containerd[1493]: 2025-09-04 23:50:51.333 [INFO][4802] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali368a7a3bed4 ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" Namespace="calico-system" Pod="csi-node-driver-28pvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--28pvw-eth0" Sep 4 23:50:51.650396 containerd[1493]: 2025-09-04 23:50:51.336 [INFO][4802] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" 
Namespace="calico-system" Pod="csi-node-driver-28pvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--28pvw-eth0" Sep 4 23:50:51.650396 containerd[1493]: 2025-09-04 23:50:51.336 [INFO][4802] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" Namespace="calico-system" Pod="csi-node-driver-28pvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--28pvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--28pvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a98fe27d-f9b2-46e3-a9a5-1f9fa577590e", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61", Pod:"csi-node-driver-28pvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali368a7a3bed4", MAC:"2a:e4:51:1c:df:ca", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:51.650396 containerd[1493]: 2025-09-04 23:50:51.644 [INFO][4802] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61" Namespace="calico-system" Pod="csi-node-driver-28pvw" WorkloadEndpoint="localhost-k8s-csi--node--driver--28pvw-eth0" Sep 4 23:50:51.709875 containerd[1493]: time="2025-09-04T23:50:51.709052164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:51.710455 containerd[1493]: time="2025-09-04T23:50:51.709928199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:51.710455 containerd[1493]: time="2025-09-04T23:50:51.710262879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:51.710635 containerd[1493]: time="2025-09-04T23:50:51.710567792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:51.721446 systemd-networkd[1427]: calif30f6b32ae8: Link UP Sep 4 23:50:51.724812 systemd-networkd[1427]: calif30f6b32ae8: Gained carrier Sep 4 23:50:51.746316 systemd[1]: Started cri-containerd-c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61.scope - libcontainer container c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61. Sep 4 23:50:51.767048 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:50:51.783875 systemd[1]: Started sshd@12-10.0.0.62:22-10.0.0.1:59298.service - OpenSSH per-connection server daemon (10.0.0.1:59298). 
Sep 4 23:50:51.788640 containerd[1493]: time="2025-09-04T23:50:51.788591035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-28pvw,Uid:a98fe27d-f9b2-46e3-a9a5-1f9fa577590e,Namespace:calico-system,Attempt:5,} returns sandbox id \"c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61\"" Sep 4 23:50:51.892572 sshd[5414]: Accepted publickey for core from 10.0.0.1 port 59298 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:50:51.908963 sshd-session[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:51.966016 systemd-logind[1475]: New session 13 of user core. Sep 4 23:50:51.973330 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:47.710 [INFO][4767] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:47.747 [INFO][4767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0 calico-kube-controllers-9fcb7bdb4- calico-system 53444f07-cee6-411d-9fba-8985320ac23a 968 0 2025-09-04 23:49:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9fcb7bdb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-9fcb7bdb4-fjbtd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif30f6b32ae8 [] [] }} ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Namespace="calico-system" Pod="calico-kube-controllers-9fcb7bdb4-fjbtd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:47.747 
[INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Namespace="calico-system" Pod="calico-kube-controllers-9fcb7bdb4-fjbtd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:48.267 [INFO][4841] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" HandleID="k8s-pod-network.421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Workload="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:48.268 [INFO][4841] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" HandleID="k8s-pod-network.421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Workload="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf110), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9fcb7bdb4-fjbtd", "timestamp":"2025-09-04 23:50:48.267864451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:48.269 [INFO][4841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.329 [INFO][4841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.329 [INFO][4841] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.644 [INFO][4841] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" host="localhost" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.652 [INFO][4841] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.657 [INFO][4841] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.659 [INFO][4841] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.661 [INFO][4841] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.661 [INFO][4841] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" host="localhost" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.663 [INFO][4841] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.677 [INFO][4841] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" host="localhost" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.704 [INFO][4841] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" host="localhost" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.704 [INFO][4841] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" host="localhost" Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.704 [INFO][4841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 23:50:52.023445 containerd[1493]: 2025-09-04 23:50:51.704 [INFO][4841] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" HandleID="k8s-pod-network.421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Workload="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0" Sep 4 23:50:52.024214 containerd[1493]: 2025-09-04 23:50:51.711 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Namespace="calico-system" Pod="calico-kube-controllers-9fcb7bdb4-fjbtd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0", GenerateName:"calico-kube-controllers-9fcb7bdb4-", Namespace:"calico-system", SelfLink:"", UID:"53444f07-cee6-411d-9fba-8985320ac23a", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9fcb7bdb4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9fcb7bdb4-fjbtd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif30f6b32ae8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:52.024214 containerd[1493]: 2025-09-04 23:50:51.711 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Namespace="calico-system" Pod="calico-kube-controllers-9fcb7bdb4-fjbtd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0" Sep 4 23:50:52.024214 containerd[1493]: 2025-09-04 23:50:51.711 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif30f6b32ae8 ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Namespace="calico-system" Pod="calico-kube-controllers-9fcb7bdb4-fjbtd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0" Sep 4 23:50:52.024214 containerd[1493]: 2025-09-04 23:50:51.722 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Namespace="calico-system" Pod="calico-kube-controllers-9fcb7bdb4-fjbtd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0" Sep 4 23:50:52.024214 containerd[1493]: 2025-09-04 
23:50:51.723 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Namespace="calico-system" Pod="calico-kube-controllers-9fcb7bdb4-fjbtd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0", GenerateName:"calico-kube-controllers-9fcb7bdb4-", Namespace:"calico-system", SelfLink:"", UID:"53444f07-cee6-411d-9fba-8985320ac23a", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9fcb7bdb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb", Pod:"calico-kube-controllers-9fcb7bdb4-fjbtd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif30f6b32ae8", MAC:"ea:15:9c:0b:b1:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:52.024214 containerd[1493]: 2025-09-04 
23:50:52.016 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb" Namespace="calico-system" Pod="calico-kube-controllers-9fcb7bdb4-fjbtd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9fcb7bdb4--fjbtd-eth0" Sep 4 23:50:52.099194 systemd-networkd[1427]: cali657c20990c7: Link UP Sep 4 23:50:52.106858 kubelet[2748]: E0904 23:50:52.106803 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:52.108216 systemd-networkd[1427]: cali657c20990c7: Gained carrier Sep 4 23:50:52.119963 containerd[1493]: time="2025-09-04T23:50:52.119493857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:52.119963 containerd[1493]: time="2025-09-04T23:50:52.119570704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:52.119963 containerd[1493]: time="2025-09-04T23:50:52.119585353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:52.119963 containerd[1493]: time="2025-09-04T23:50:52.119680925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:52.148339 systemd[1]: Started cri-containerd-421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb.scope - libcontainer container 421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb. 
Sep 4 23:50:52.165959 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:50:52.196293 containerd[1493]: time="2025-09-04T23:50:52.196251102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fcb7bdb4-fjbtd,Uid:53444f07-cee6-411d-9fba-8985320ac23a,Namespace:calico-system,Attempt:3,} returns sandbox id \"421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb\"" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:47.559 [INFO][4727] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:47.689 [INFO][4727] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0 coredns-674b8bbfcf- kube-system 72827434-065b-42a4-a2fc-ad0f0cccd362 972 0 2025-09-04 23:49:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-q7s5n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali657c20990c7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7s5n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q7s5n-" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:47.689 [INFO][4727] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7s5n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:48.268 [INFO][4821] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" HandleID="k8s-pod-network.18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Workload="localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:48.269 [INFO][4821] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" HandleID="k8s-pod-network.18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Workload="localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000188920), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-q7s5n", "timestamp":"2025-09-04 23:50:48.268880935 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:48.269 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:51.705 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:51.705 [INFO][4821] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:51.835 [INFO][4821] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" host="localhost" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.021 [INFO][4821] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.031 [INFO][4821] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.038 [INFO][4821] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.041 [INFO][4821] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.041 [INFO][4821] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" host="localhost" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.044 [INFO][4821] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9 Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.057 [INFO][4821] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" host="localhost" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.076 [INFO][4821] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" host="localhost" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.076 [INFO][4821] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" host="localhost" Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.076 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 23:50:52.493070 containerd[1493]: 2025-09-04 23:50:52.076 [INFO][4821] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" HandleID="k8s-pod-network.18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Workload="localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0" Sep 4 23:50:52.494164 containerd[1493]: 2025-09-04 23:50:52.082 [INFO][4727] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7s5n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"72827434-065b-42a4-a2fc-ad0f0cccd362", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-q7s5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali657c20990c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:52.494164 containerd[1493]: 2025-09-04 23:50:52.084 [INFO][4727] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7s5n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0" Sep 4 23:50:52.494164 containerd[1493]: 2025-09-04 23:50:52.084 [INFO][4727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali657c20990c7 ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7s5n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0" Sep 4 23:50:52.494164 containerd[1493]: 2025-09-04 23:50:52.113 [INFO][4727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7s5n" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0" Sep 4 23:50:52.494164 containerd[1493]: 2025-09-04 23:50:52.116 [INFO][4727] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7s5n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"72827434-065b-42a4-a2fc-ad0f0cccd362", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9", Pod:"coredns-674b8bbfcf-q7s5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali657c20990c7", MAC:"e6:a6:e0:03:e0:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:52.494164 containerd[1493]: 2025-09-04 23:50:52.487 [INFO][4727] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7s5n" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q7s5n-eth0" Sep 4 23:50:52.503279 sshd[5417]: Connection closed by 10.0.0.1 port 59298 Sep 4 23:50:52.503979 sshd-session[5414]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:52.508545 systemd[1]: sshd@12-10.0.0.62:22-10.0.0.1:59298.service: Deactivated successfully. Sep 4 23:50:52.510982 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 23:50:52.513067 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit. Sep 4 23:50:52.514863 systemd-logind[1475]: Removed session 13. Sep 4 23:50:52.778433 systemd-networkd[1427]: calif30f6b32ae8: Gained IPv6LL Sep 4 23:50:52.996026 containerd[1493]: time="2025-09-04T23:50:52.995186625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:52.996026 containerd[1493]: time="2025-09-04T23:50:52.995978198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:52.996026 containerd[1493]: time="2025-09-04T23:50:52.995994899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:52.996553 containerd[1493]: time="2025-09-04T23:50:52.996097846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:53.026376 systemd[1]: Started cri-containerd-18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9.scope - libcontainer container 18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9. Sep 4 23:50:53.041975 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:50:53.069312 containerd[1493]: time="2025-09-04T23:50:53.069272688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7s5n,Uid:72827434-065b-42a4-a2fc-ad0f0cccd362,Namespace:kube-system,Attempt:3,} returns sandbox id \"18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9\"" Sep 4 23:50:53.070058 kubelet[2748]: E0904 23:50:53.070035 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:53.226374 systemd-networkd[1427]: cali368a7a3bed4: Gained IPv6LL Sep 4 23:50:53.226832 systemd-networkd[1427]: cali657c20990c7: Gained IPv6LL Sep 4 23:50:53.245332 containerd[1493]: time="2025-09-04T23:50:53.245269772Z" level=info msg="CreateContainer within sandbox \"18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:50:53.406328 systemd-networkd[1427]: cali70e06c82b49: Link UP Sep 4 23:50:53.408483 systemd-networkd[1427]: cali70e06c82b49: Gained carrier Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:47.802 [INFO][4824] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:47.835 [INFO][4824] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0 calico-apiserver-78c45fc5f4- calico-apiserver 
d64b33cd-4dd7-45f1-991e-5a59b10c693d 977 0 2025-09-04 23:49:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78c45fc5f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78c45fc5f4-77ks8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali70e06c82b49 [] [] }} ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-77ks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:47.835 [INFO][4824] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-77ks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:48.269 [INFO][4856] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" HandleID="k8s-pod-network.af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Workload="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:48.269 [INFO][4856] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" HandleID="k8s-pod-network.af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Workload="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-78c45fc5f4-77ks8", "timestamp":"2025-09-04 23:50:48.269601865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:48.269 [INFO][4856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:52.077 [INFO][4856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:52.077 [INFO][4856] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:52.108 [INFO][4856] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" host="localhost" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:52.796 [INFO][4856] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:53.057 [INFO][4856] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:53.119 [INFO][4856] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:53.122 [INFO][4856] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:53.122 [INFO][4856] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" host="localhost" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:53.140 [INFO][4856] ipam/ipam.go 1764: 
Creating new handle: k8s-pod-network.af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94 Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:53.236 [INFO][4856] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" host="localhost" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:53.397 [INFO][4856] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" host="localhost" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:53.397 [INFO][4856] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" host="localhost" Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:53.398 [INFO][4856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 23:50:53.522612 containerd[1493]: 2025-09-04 23:50:53.398 [INFO][4856] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" HandleID="k8s-pod-network.af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Workload="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0" Sep 4 23:50:53.523527 containerd[1493]: 2025-09-04 23:50:53.402 [INFO][4824] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-77ks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0", GenerateName:"calico-apiserver-78c45fc5f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d64b33cd-4dd7-45f1-991e-5a59b10c693d", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c45fc5f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78c45fc5f4-77ks8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70e06c82b49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:53.523527 containerd[1493]: 2025-09-04 23:50:53.402 [INFO][4824] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-77ks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0" Sep 4 23:50:53.523527 containerd[1493]: 2025-09-04 23:50:53.402 [INFO][4824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70e06c82b49 ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-77ks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0" Sep 4 23:50:53.523527 containerd[1493]: 2025-09-04 23:50:53.410 [INFO][4824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-77ks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0" Sep 4 23:50:53.523527 containerd[1493]: 2025-09-04 23:50:53.411 [INFO][4824] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-77ks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0", 
GenerateName:"calico-apiserver-78c45fc5f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d64b33cd-4dd7-45f1-991e-5a59b10c693d", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78c45fc5f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94", Pod:"calico-apiserver-78c45fc5f4-77ks8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70e06c82b49", MAC:"ce:9e:15:ca:a7:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:50:53.523527 containerd[1493]: 2025-09-04 23:50:53.486 [INFO][4824] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94" Namespace="calico-apiserver" Pod="calico-apiserver-78c45fc5f4-77ks8" WorkloadEndpoint="localhost-k8s-calico--apiserver--78c45fc5f4--77ks8-eth0" Sep 4 23:50:53.619688 containerd[1493]: time="2025-09-04T23:50:53.618543959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:53.619688 containerd[1493]: time="2025-09-04T23:50:53.618742618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:53.619688 containerd[1493]: time="2025-09-04T23:50:53.618764761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:53.619688 containerd[1493]: time="2025-09-04T23:50:53.618897854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:53.653513 systemd[1]: Started cri-containerd-af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94.scope - libcontainer container af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94. Sep 4 23:50:53.675119 systemd-resolved[1372]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:50:53.703252 containerd[1493]: time="2025-09-04T23:50:53.703184656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78c45fc5f4-77ks8,Uid:d64b33cd-4dd7-45f1-991e-5a59b10c693d,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94\"" Sep 4 23:50:53.941790 containerd[1493]: time="2025-09-04T23:50:53.941376570Z" level=info msg="CreateContainer within sandbox \"18e1b1f480c21062816c556507fe0d05cb4fade6f9cfa425fe0c44cfc2d547b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39abc1a7ebd9b60f9125cea43b4d322b8cc83b6f8ff9be171454b681d985605f\"" Sep 4 23:50:53.942676 containerd[1493]: time="2025-09-04T23:50:53.942588165Z" level=info msg="StartContainer for \"39abc1a7ebd9b60f9125cea43b4d322b8cc83b6f8ff9be171454b681d985605f\"" Sep 4 23:50:53.977441 systemd[1]: Started 
cri-containerd-39abc1a7ebd9b60f9125cea43b4d322b8cc83b6f8ff9be171454b681d985605f.scope - libcontainer container 39abc1a7ebd9b60f9125cea43b4d322b8cc83b6f8ff9be171454b681d985605f. Sep 4 23:50:54.045689 containerd[1493]: time="2025-09-04T23:50:54.045610856Z" level=info msg="StartContainer for \"39abc1a7ebd9b60f9125cea43b4d322b8cc83b6f8ff9be171454b681d985605f\" returns successfully" Sep 4 23:50:54.129754 kubelet[2748]: E0904 23:50:54.129713 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:54.130355 containerd[1493]: time="2025-09-04T23:50:54.130296005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:54.132511 containerd[1493]: time="2025-09-04T23:50:54.132444158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 23:50:54.135064 containerd[1493]: time="2025-09-04T23:50:54.135009629Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:54.140143 containerd[1493]: time="2025-09-04T23:50:54.140091066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:54.141991 containerd[1493]: time="2025-09-04T23:50:54.141817985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 
5.00244676s" Sep 4 23:50:54.141991 containerd[1493]: time="2025-09-04T23:50:54.141854554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 23:50:54.144008 containerd[1493]: time="2025-09-04T23:50:54.143973692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 4 23:50:54.149879 containerd[1493]: time="2025-09-04T23:50:54.149815913Z" level=info msg="CreateContainer within sandbox \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 23:50:54.170852 kubelet[2748]: I0904 23:50:54.170761 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q7s5n" podStartSLOduration=102.17073839 podStartE2EDuration="1m42.17073839s" podCreationTimestamp="2025-09-04 23:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:50:54.147412191 +0000 UTC m=+109.092496095" watchObservedRunningTime="2025-09-04 23:50:54.17073839 +0000 UTC m=+109.115822294" Sep 4 23:50:54.206005 containerd[1493]: time="2025-09-04T23:50:54.205822920Z" level=info msg="CreateContainer within sandbox \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb\"" Sep 4 23:50:54.206784 containerd[1493]: time="2025-09-04T23:50:54.206752636Z" level=info msg="StartContainer for \"4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb\"" Sep 4 23:50:54.251966 systemd[1]: run-containerd-runc-k8s.io-4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb-runc.g00TbF.mount: Deactivated successfully. 
Sep 4 23:50:54.264551 systemd[1]: Started cri-containerd-4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb.scope - libcontainer container 4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb. Sep 4 23:50:54.341855 containerd[1493]: time="2025-09-04T23:50:54.341787024Z" level=info msg="StartContainer for \"4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb\" returns successfully" Sep 4 23:50:54.762358 systemd-networkd[1427]: cali70e06c82b49: Gained IPv6LL Sep 4 23:50:55.137277 kubelet[2748]: E0904 23:50:55.136599 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:56.138609 kubelet[2748]: E0904 23:50:56.138548 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:56.151411 kubelet[2748]: E0904 23:50:56.151357 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:57.228627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506660266.mount: Deactivated successfully. Sep 4 23:50:57.519969 systemd[1]: Started sshd@13-10.0.0.62:22-10.0.0.1:59306.service - OpenSSH per-connection server daemon (10.0.0.1:59306). Sep 4 23:50:57.633498 sshd[5685]: Accepted publickey for core from 10.0.0.1 port 59306 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:50:57.635932 sshd-session[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:50:57.642237 systemd-logind[1475]: New session 14 of user core. Sep 4 23:50:57.648361 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 4 23:50:57.895112 sshd[5687]: Connection closed by 10.0.0.1 port 59306 Sep 4 23:50:57.896359 sshd-session[5685]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:57.900724 systemd[1]: sshd@13-10.0.0.62:22-10.0.0.1:59306.service: Deactivated successfully. Sep 4 23:50:57.904797 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 23:50:57.907431 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit. Sep 4 23:50:57.908994 systemd-logind[1475]: Removed session 14. Sep 4 23:50:58.567337 containerd[1493]: time="2025-09-04T23:50:58.567136096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:58.569101 containerd[1493]: time="2025-09-04T23:50:58.569021154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 4 23:50:58.570857 containerd[1493]: time="2025-09-04T23:50:58.570717171Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:58.574166 containerd[1493]: time="2025-09-04T23:50:58.574114115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:58.574965 containerd[1493]: time="2025-09-04T23:50:58.574935653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.430920242s" Sep 4 23:50:58.575025 containerd[1493]: time="2025-09-04T23:50:58.574981932Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 4 23:50:58.578357 containerd[1493]: time="2025-09-04T23:50:58.578313911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 23:50:58.584208 containerd[1493]: time="2025-09-04T23:50:58.584137487Z" level=info msg="CreateContainer within sandbox \"9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 4 23:50:58.607316 containerd[1493]: time="2025-09-04T23:50:58.607249253Z" level=info msg="CreateContainer within sandbox \"9e16cbf5a9c20b53a235cfc07965d89808781afdac1028907244c1f97292726d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d3c4f2188e3a0310feb441effb02acb00bc5f8af3c7f9ac189cc705f3deb3121\"" Sep 4 23:50:58.608070 containerd[1493]: time="2025-09-04T23:50:58.608034401Z" level=info msg="StartContainer for \"d3c4f2188e3a0310feb441effb02acb00bc5f8af3c7f9ac189cc705f3deb3121\"" Sep 4 23:50:58.646418 systemd[1]: Started cri-containerd-d3c4f2188e3a0310feb441effb02acb00bc5f8af3c7f9ac189cc705f3deb3121.scope - libcontainer container d3c4f2188e3a0310feb441effb02acb00bc5f8af3c7f9ac189cc705f3deb3121. 
Sep 4 23:50:58.704730 containerd[1493]: time="2025-09-04T23:50:58.704670850Z" level=info msg="StartContainer for \"d3c4f2188e3a0310feb441effb02acb00bc5f8af3c7f9ac189cc705f3deb3121\" returns successfully" Sep 4 23:51:00.151001 kubelet[2748]: E0904 23:51:00.150915 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:02.108941 kubelet[2748]: E0904 23:51:02.108896 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:02.274693 kubelet[2748]: I0904 23:51:02.274523 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-d8krf" podStartSLOduration=72.717715377 podStartE2EDuration="1m21.274501133s" podCreationTimestamp="2025-09-04 23:49:41 +0000 UTC" firstStartedPulling="2025-09-04 23:50:50.019136642 +0000 UTC m=+104.964220546" lastFinishedPulling="2025-09-04 23:50:58.575922397 +0000 UTC m=+113.521006302" observedRunningTime="2025-09-04 23:50:59.24327464 +0000 UTC m=+114.188358544" watchObservedRunningTime="2025-09-04 23:51:02.274501133 +0000 UTC m=+117.219585037" Sep 4 23:51:02.910452 systemd[1]: Started sshd@14-10.0.0.62:22-10.0.0.1:34326.service - OpenSSH per-connection server daemon (10.0.0.1:34326). Sep 4 23:51:04.072619 sshd[5806]: Accepted publickey for core from 10.0.0.1 port 34326 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:04.074785 sshd-session[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:04.082939 systemd-logind[1475]: New session 15 of user core. Sep 4 23:51:04.090470 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 4 23:51:04.275943 sshd[5814]: Connection closed by 10.0.0.1 port 34326 Sep 4 23:51:04.274057 sshd-session[5806]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:04.287458 systemd[1]: sshd@14-10.0.0.62:22-10.0.0.1:34326.service: Deactivated successfully. Sep 4 23:51:04.292554 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 23:51:04.296294 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit. Sep 4 23:51:04.304863 systemd[1]: Started sshd@15-10.0.0.62:22-10.0.0.1:34336.service - OpenSSH per-connection server daemon (10.0.0.1:34336). Sep 4 23:51:04.305760 systemd-logind[1475]: Removed session 15. Sep 4 23:51:04.359679 sshd[5828]: Accepted publickey for core from 10.0.0.1 port 34336 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:04.361945 sshd-session[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:04.369482 systemd-logind[1475]: New session 16 of user core. Sep 4 23:51:04.382437 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 4 23:51:05.163828 containerd[1493]: time="2025-09-04T23:51:05.163563600Z" level=info msg="StopPodSandbox for \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\"" Sep 4 23:51:05.163828 containerd[1493]: time="2025-09-04T23:51:05.163730779Z" level=info msg="TearDown network for sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\" successfully" Sep 4 23:51:05.163828 containerd[1493]: time="2025-09-04T23:51:05.163747450Z" level=info msg="StopPodSandbox for \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\" returns successfully" Sep 4 23:51:05.164477 containerd[1493]: time="2025-09-04T23:51:05.164083170Z" level=info msg="RemovePodSandbox for \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\"" Sep 4 23:51:05.167987 containerd[1493]: time="2025-09-04T23:51:05.167944351Z" level=info msg="Forcibly stopping sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\"" Sep 4 23:51:05.168108 containerd[1493]: time="2025-09-04T23:51:05.168049682Z" level=info msg="TearDown network for sandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\" successfully" Sep 4 23:51:05.807372 sshd[5831]: Connection closed by 10.0.0.1 port 34336 Sep 4 23:51:05.807945 sshd-session[5828]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:05.829586 systemd[1]: Started sshd@16-10.0.0.62:22-10.0.0.1:34342.service - OpenSSH per-connection server daemon (10.0.0.1:34342). Sep 4 23:51:05.830301 systemd[1]: sshd@15-10.0.0.62:22-10.0.0.1:34336.service: Deactivated successfully. Sep 4 23:51:05.833407 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 23:51:05.836811 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit. Sep 4 23:51:05.840051 systemd-logind[1475]: Removed session 16. 
Sep 4 23:51:05.863579 containerd[1493]: time="2025-09-04T23:51:05.863487977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:51:05.863879 containerd[1493]: time="2025-09-04T23:51:05.863622573Z" level=info msg="RemovePodSandbox \"62dbe580a9b773cc0a55f276be99b8d2f94ffc74fc51d095c6c69801b5ca6cf4\" returns successfully" Sep 4 23:51:05.864649 containerd[1493]: time="2025-09-04T23:51:05.864597452Z" level=info msg="StopPodSandbox for \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\"" Sep 4 23:51:05.864791 containerd[1493]: time="2025-09-04T23:51:05.864764671Z" level=info msg="TearDown network for sandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\" successfully" Sep 4 23:51:05.864791 containerd[1493]: time="2025-09-04T23:51:05.864784098Z" level=info msg="StopPodSandbox for \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\" returns successfully" Sep 4 23:51:05.865348 containerd[1493]: time="2025-09-04T23:51:05.865243565Z" level=info msg="RemovePodSandbox for \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\"" Sep 4 23:51:05.865348 containerd[1493]: time="2025-09-04T23:51:05.865289472Z" level=info msg="Forcibly stopping sandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\"" Sep 4 23:51:05.865748 containerd[1493]: time="2025-09-04T23:51:05.865442243Z" level=info msg="TearDown network for sandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\" successfully" Sep 4 23:51:05.937596 sshd[5843]: Accepted publickey for core from 10.0.0.1 port 34342 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:05.940031 sshd-session[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:05.946141 
systemd-logind[1475]: New session 17 of user core. Sep 4 23:51:05.952449 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 23:51:06.027814 containerd[1493]: time="2025-09-04T23:51:06.027732228Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:51:06.028073 containerd[1493]: time="2025-09-04T23:51:06.027864450Z" level=info msg="RemovePodSandbox \"28eb0a182153ad8068a0264ed0314d46ff402137a8abec4ecd5d76e66a9f66de\" returns successfully" Sep 4 23:51:06.030046 containerd[1493]: time="2025-09-04T23:51:06.029740016Z" level=info msg="StopPodSandbox for \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\"" Sep 4 23:51:06.030046 containerd[1493]: time="2025-09-04T23:51:06.029925529Z" level=info msg="TearDown network for sandbox \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\" successfully" Sep 4 23:51:06.030046 containerd[1493]: time="2025-09-04T23:51:06.029947190Z" level=info msg="StopPodSandbox for \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\" returns successfully" Sep 4 23:51:06.031299 containerd[1493]: time="2025-09-04T23:51:06.031218183Z" level=info msg="RemovePodSandbox for \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\"" Sep 4 23:51:06.031299 containerd[1493]: time="2025-09-04T23:51:06.031256065Z" level=info msg="Forcibly stopping sandbox \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\"" Sep 4 23:51:06.031937 containerd[1493]: time="2025-09-04T23:51:06.031365895Z" level=info msg="TearDown network for sandbox \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\" successfully" Sep 4 23:51:06.046332 containerd[1493]: time="2025-09-04T23:51:06.046136940Z" level=warning msg="Failed to get podSandbox status for container event for 
sandboxID \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:51:06.046332 containerd[1493]: time="2025-09-04T23:51:06.046229687Z" level=info msg="RemovePodSandbox \"94a73c1404240a627b007234a5c7fb472280406044f84368e9b1453d6b4b4455\" returns successfully" Sep 4 23:51:06.047838 containerd[1493]: time="2025-09-04T23:51:06.047705199Z" level=info msg="StopPodSandbox for \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\"" Sep 4 23:51:06.047934 containerd[1493]: time="2025-09-04T23:51:06.047845116Z" level=info msg="TearDown network for sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\" successfully" Sep 4 23:51:06.047934 containerd[1493]: time="2025-09-04T23:51:06.047856949Z" level=info msg="StopPodSandbox for \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\" returns successfully" Sep 4 23:51:06.048679 containerd[1493]: time="2025-09-04T23:51:06.048570419Z" level=info msg="RemovePodSandbox for \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\"" Sep 4 23:51:06.048679 containerd[1493]: time="2025-09-04T23:51:06.048597841Z" level=info msg="Forcibly stopping sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\"" Sep 4 23:51:06.048821 containerd[1493]: time="2025-09-04T23:51:06.048697150Z" level=info msg="TearDown network for sandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\" successfully" Sep 4 23:51:06.056847 containerd[1493]: time="2025-09-04T23:51:06.056593020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:51:06.056847 containerd[1493]: time="2025-09-04T23:51:06.056701958Z" level=info msg="RemovePodSandbox \"d603932b1c29c6698b478388a4606f10fee01740bfa67c776d2b730c19bb2c7d\" returns successfully" Sep 4 23:51:06.058329 containerd[1493]: time="2025-09-04T23:51:06.057877389Z" level=info msg="StopPodSandbox for \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\"" Sep 4 23:51:06.058329 containerd[1493]: time="2025-09-04T23:51:06.058022254Z" level=info msg="TearDown network for sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\" successfully" Sep 4 23:51:06.058329 containerd[1493]: time="2025-09-04T23:51:06.058037213Z" level=info msg="StopPodSandbox for \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\" returns successfully" Sep 4 23:51:06.062193 containerd[1493]: time="2025-09-04T23:51:06.059376366Z" level=info msg="RemovePodSandbox for \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\"" Sep 4 23:51:06.062193 containerd[1493]: time="2025-09-04T23:51:06.059412054Z" level=info msg="Forcibly stopping sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\"" Sep 4 23:51:06.062193 containerd[1493]: time="2025-09-04T23:51:06.059507586Z" level=info msg="TearDown network for sandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\" successfully" Sep 4 23:51:06.066718 containerd[1493]: time="2025-09-04T23:51:06.066627738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:06.111109 sshd[5848]: Connection closed by 10.0.0.1 port 34342 Sep 4 23:51:06.111638 sshd-session[5843]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:06.116989 systemd[1]: sshd@16-10.0.0.62:22-10.0.0.1:34342.service: Deactivated successfully. Sep 4 23:51:06.119637 systemd[1]: session-17.scope: Deactivated successfully. 
Sep 4 23:51:06.120478 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit.
Sep 4 23:51:06.121639 systemd-logind[1475]: Removed session 17.
Sep 4 23:51:06.138436 containerd[1493]: time="2025-09-04T23:51:06.138235671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 4 23:51:06.184163 containerd[1493]: time="2025-09-04T23:51:06.183940810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:06.184163 containerd[1493]: time="2025-09-04T23:51:06.184026674Z" level=info msg="RemovePodSandbox \"5337c56b905a4ba9d986253b5688933e97b153059a80f61ac367e1e86236fc63\" returns successfully"
Sep 4 23:51:06.184807 containerd[1493]: time="2025-09-04T23:51:06.184488434Z" level=info msg="StopPodSandbox for \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\""
Sep 4 23:51:06.184807 containerd[1493]: time="2025-09-04T23:51:06.184721518Z" level=info msg="TearDown network for sandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\" successfully"
Sep 4 23:51:06.184807 containerd[1493]: time="2025-09-04T23:51:06.184739252Z" level=info msg="StopPodSandbox for \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\" returns successfully"
Sep 4 23:51:06.185895 containerd[1493]: time="2025-09-04T23:51:06.185841683Z" level=info msg="RemovePodSandbox for \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\""
Sep 4 23:51:06.185895 containerd[1493]: time="2025-09-04T23:51:06.185869226Z" level=info msg="Forcibly stopping sandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\""
Sep 4 23:51:06.186034 containerd[1493]: time="2025-09-04T23:51:06.185995737Z" level=info msg="TearDown network for sandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\" successfully"
Sep 4 23:51:06.225983 containerd[1493]: time="2025-09-04T23:51:06.225910609Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:51:06.357325 containerd[1493]: time="2025-09-04T23:51:06.357102957Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:06.357325 containerd[1493]: time="2025-09-04T23:51:06.357241832Z" level=info msg="RemovePodSandbox \"adb3d1ad720be189a69fd743c12a21d3582121aad430227e8c8434af0a2cc0c0\" returns successfully"
Sep 4 23:51:06.357970 containerd[1493]: time="2025-09-04T23:51:06.357835964Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\""
Sep 4 23:51:06.358039 containerd[1493]: time="2025-09-04T23:51:06.358004225Z" level=info msg="TearDown network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" successfully"
Sep 4 23:51:06.358039 containerd[1493]: time="2025-09-04T23:51:06.358018242Z" level=info msg="StopPodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" returns successfully"
Sep 4 23:51:06.358626 containerd[1493]: time="2025-09-04T23:51:06.358467929Z" level=info msg="RemovePodSandbox for \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\""
Sep 4 23:51:06.358626 containerd[1493]: time="2025-09-04T23:51:06.358499239Z" level=info msg="Forcibly stopping sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\""
Sep 4 23:51:06.358738 containerd[1493]: time="2025-09-04T23:51:06.358659765Z" level=info msg="TearDown network for sandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" successfully"
Sep 4 23:51:06.424023 containerd[1493]: time="2025-09-04T23:51:06.423905401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:51:06.425499 containerd[1493]: time="2025-09-04T23:51:06.425429425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 7.847069315s"
Sep 4 23:51:06.425499 containerd[1493]: time="2025-09-04T23:51:06.425487075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 4 23:51:06.426679 containerd[1493]: time="2025-09-04T23:51:06.426655832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 4 23:51:06.690240 containerd[1493]: time="2025-09-04T23:51:06.690164199Z" level=info msg="CreateContainer within sandbox \"c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 23:51:06.800022 containerd[1493]: time="2025-09-04T23:51:06.799845284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:06.800022 containerd[1493]: time="2025-09-04T23:51:06.799987536Z" level=info msg="RemovePodSandbox \"b890a711deab43d57276d2b356b0fa67ba083810796cf4d70bf1994c4f583a29\" returns successfully"
Sep 4 23:51:06.800816 containerd[1493]: time="2025-09-04T23:51:06.800790336Z" level=info msg="StopPodSandbox for \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\""
Sep 4 23:51:06.800974 containerd[1493]: time="2025-09-04T23:51:06.800933849Z" level=info msg="TearDown network for sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" successfully"
Sep 4 23:51:06.800974 containerd[1493]: time="2025-09-04T23:51:06.800954999Z" level=info msg="StopPodSandbox for \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" returns successfully"
Sep 4 23:51:06.801441 containerd[1493]: time="2025-09-04T23:51:06.801391291Z" level=info msg="RemovePodSandbox for \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\""
Sep 4 23:51:06.801441 containerd[1493]: time="2025-09-04T23:51:06.801424034Z" level=info msg="Forcibly stopping sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\""
Sep 4 23:51:06.801595 containerd[1493]: time="2025-09-04T23:51:06.801522812Z" level=info msg="TearDown network for sandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" successfully"
Sep 4 23:51:07.074117 containerd[1493]: time="2025-09-04T23:51:07.073719938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.074117 containerd[1493]: time="2025-09-04T23:51:07.073862871Z" level=info msg="RemovePodSandbox \"08d0aa4233e7e82b13a92c38c9b254d537f40e723d8699dc83525d746dfb07c0\" returns successfully"
Sep 4 23:51:07.074984 containerd[1493]: time="2025-09-04T23:51:07.074790388Z" level=info msg="StopPodSandbox for \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\""
Sep 4 23:51:07.074984 containerd[1493]: time="2025-09-04T23:51:07.074952246Z" level=info msg="TearDown network for sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\" successfully"
Sep 4 23:51:07.074984 containerd[1493]: time="2025-09-04T23:51:07.074968608Z" level=info msg="StopPodSandbox for \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\" returns successfully"
Sep 4 23:51:07.075711 containerd[1493]: time="2025-09-04T23:51:07.075656799Z" level=info msg="RemovePodSandbox for \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\""
Sep 4 23:51:07.075777 containerd[1493]: time="2025-09-04T23:51:07.075716283Z" level=info msg="Forcibly stopping sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\""
Sep 4 23:51:07.075946 containerd[1493]: time="2025-09-04T23:51:07.075871659Z" level=info msg="TearDown network for sandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\" successfully"
Sep 4 23:51:07.114676 containerd[1493]: time="2025-09-04T23:51:07.114612832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.115093 containerd[1493]: time="2025-09-04T23:51:07.114708364Z" level=info msg="RemovePodSandbox \"ea47a482b9740adc46972c4b01e07b784f9b64a62e1f6146d62a71602e865561\" returns successfully"
Sep 4 23:51:07.115343 containerd[1493]: time="2025-09-04T23:51:07.115308017Z" level=info msg="StopPodSandbox for \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\""
Sep 4 23:51:07.115494 containerd[1493]: time="2025-09-04T23:51:07.115470426Z" level=info msg="TearDown network for sandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\" successfully"
Sep 4 23:51:07.115571 containerd[1493]: time="2025-09-04T23:51:07.115488901Z" level=info msg="StopPodSandbox for \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\" returns successfully"
Sep 4 23:51:07.118159 containerd[1493]: time="2025-09-04T23:51:07.115844559Z" level=info msg="RemovePodSandbox for \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\""
Sep 4 23:51:07.118159 containerd[1493]: time="2025-09-04T23:51:07.115878684Z" level=info msg="Forcibly stopping sandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\""
Sep 4 23:51:07.118159 containerd[1493]: time="2025-09-04T23:51:07.115972783Z" level=info msg="TearDown network for sandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\" successfully"
Sep 4 23:51:07.145490 containerd[1493]: time="2025-09-04T23:51:07.145381331Z" level=info msg="CreateContainer within sandbox \"c96b66706214f1e86ed4ca345627232779c2603906bd36248936099685dac8ef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ed01dc68914e595ad7c11605f2cc51a85390c6fff38105063082494994c80b2a\""
Sep 4 23:51:07.146449 containerd[1493]: time="2025-09-04T23:51:07.146298910Z" level=info msg="StartContainer for \"ed01dc68914e595ad7c11605f2cc51a85390c6fff38105063082494994c80b2a\""
Sep 4 23:51:07.159438 containerd[1493]: time="2025-09-04T23:51:07.159342568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.159802 containerd[1493]: time="2025-09-04T23:51:07.159753672Z" level=info msg="RemovePodSandbox \"d5adc59b2da9ee6796daba5d17922706b70209beebf95dad7632242dcf1db464\" returns successfully"
Sep 4 23:51:07.160738 containerd[1493]: time="2025-09-04T23:51:07.160688664Z" level=info msg="StopPodSandbox for \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\""
Sep 4 23:51:07.160839 containerd[1493]: time="2025-09-04T23:51:07.160812569Z" level=info msg="TearDown network for sandbox \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\" successfully"
Sep 4 23:51:07.160892 containerd[1493]: time="2025-09-04T23:51:07.160835563Z" level=info msg="StopPodSandbox for \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\" returns successfully"
Sep 4 23:51:07.163586 containerd[1493]: time="2025-09-04T23:51:07.163515421Z" level=info msg="RemovePodSandbox for \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\""
Sep 4 23:51:07.163672 containerd[1493]: time="2025-09-04T23:51:07.163594832Z" level=info msg="Forcibly stopping sandbox \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\""
Sep 4 23:51:07.165907 containerd[1493]: time="2025-09-04T23:51:07.163739408Z" level=info msg="TearDown network for sandbox \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\" successfully"
Sep 4 23:51:07.200059 systemd[1]: run-containerd-runc-k8s.io-ed01dc68914e595ad7c11605f2cc51a85390c6fff38105063082494994c80b2a-runc.6ebncE.mount: Deactivated successfully.
Sep 4 23:51:07.203052 containerd[1493]: time="2025-09-04T23:51:07.202773238Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.203052 containerd[1493]: time="2025-09-04T23:51:07.202866105Z" level=info msg="RemovePodSandbox \"3ca6d1079abfb546e2e4596540954c2f5b262e075cce535e55717df01a347d26\" returns successfully"
Sep 4 23:51:07.203578 containerd[1493]: time="2025-09-04T23:51:07.203542434Z" level=info msg="StopPodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\""
Sep 4 23:51:07.203739 containerd[1493]: time="2025-09-04T23:51:07.203698121Z" level=info msg="TearDown network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" successfully"
Sep 4 23:51:07.203739 containerd[1493]: time="2025-09-04T23:51:07.203718380Z" level=info msg="StopPodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" returns successfully"
Sep 4 23:51:07.204210 containerd[1493]: time="2025-09-04T23:51:07.204182414Z" level=info msg="RemovePodSandbox for \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\""
Sep 4 23:51:07.204249 containerd[1493]: time="2025-09-04T23:51:07.204220677Z" level=info msg="Forcibly stopping sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\""
Sep 4 23:51:07.204400 containerd[1493]: time="2025-09-04T23:51:07.204330958Z" level=info msg="TearDown network for sandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" successfully"
Sep 4 23:51:07.217319 systemd[1]: Started cri-containerd-ed01dc68914e595ad7c11605f2cc51a85390c6fff38105063082494994c80b2a.scope - libcontainer container ed01dc68914e595ad7c11605f2cc51a85390c6fff38105063082494994c80b2a.
Sep 4 23:51:07.256470 containerd[1493]: time="2025-09-04T23:51:07.256272468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.256470 containerd[1493]: time="2025-09-04T23:51:07.256414338Z" level=info msg="RemovePodSandbox \"cf3d61e51dbab73697b05e607574d820d36c271b0a08ded9478832df8e51601a\" returns successfully"
Sep 4 23:51:07.257820 containerd[1493]: time="2025-09-04T23:51:07.257644112Z" level=info msg="StopPodSandbox for \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\""
Sep 4 23:51:07.258255 containerd[1493]: time="2025-09-04T23:51:07.258086205Z" level=info msg="TearDown network for sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\" successfully"
Sep 4 23:51:07.258255 containerd[1493]: time="2025-09-04T23:51:07.258105281Z" level=info msg="StopPodSandbox for \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\" returns successfully"
Sep 4 23:51:07.258940 containerd[1493]: time="2025-09-04T23:51:07.258915264Z" level=info msg="RemovePodSandbox for \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\""
Sep 4 23:51:07.259065 containerd[1493]: time="2025-09-04T23:51:07.259046234Z" level=info msg="Forcibly stopping sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\""
Sep 4 23:51:07.259525 containerd[1493]: time="2025-09-04T23:51:07.259211128Z" level=info msg="TearDown network for sandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\" successfully"
Sep 4 23:51:07.327675 containerd[1493]: time="2025-09-04T23:51:07.327028485Z" level=info msg="StartContainer for \"ed01dc68914e595ad7c11605f2cc51a85390c6fff38105063082494994c80b2a\" returns successfully"
Sep 4 23:51:07.386513 containerd[1493]: time="2025-09-04T23:51:07.385670204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.386513 containerd[1493]: time="2025-09-04T23:51:07.385797868Z" level=info msg="RemovePodSandbox \"3c8d43c69250f6fdeb27d1bfc412ce1468d096156bc4414d29ab9d24a4dde9de\" returns successfully"
Sep 4 23:51:07.386513 containerd[1493]: time="2025-09-04T23:51:07.386529371Z" level=info msg="StopPodSandbox for \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\""
Sep 4 23:51:07.388030 containerd[1493]: time="2025-09-04T23:51:07.386721658Z" level=info msg="TearDown network for sandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\" successfully"
Sep 4 23:51:07.388030 containerd[1493]: time="2025-09-04T23:51:07.386739642Z" level=info msg="StopPodSandbox for \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\" returns successfully"
Sep 4 23:51:07.388030 containerd[1493]: time="2025-09-04T23:51:07.387376846Z" level=info msg="RemovePodSandbox for \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\""
Sep 4 23:51:07.388030 containerd[1493]: time="2025-09-04T23:51:07.387401894Z" level=info msg="Forcibly stopping sandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\""
Sep 4 23:51:07.388030 containerd[1493]: time="2025-09-04T23:51:07.387487938Z" level=info msg="TearDown network for sandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\" successfully"
Sep 4 23:51:07.484209 containerd[1493]: time="2025-09-04T23:51:07.484078351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.484474 containerd[1493]: time="2025-09-04T23:51:07.484230371Z" level=info msg="RemovePodSandbox \"9eacfad67a14b371d5fe7180ea32d4d6f71d921b4e2f400d7ce19bc86f4a148d\" returns successfully"
Sep 4 23:51:07.485051 containerd[1493]: time="2025-09-04T23:51:07.485026167Z" level=info msg="StopPodSandbox for \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\""
Sep 4 23:51:07.485261 containerd[1493]: time="2025-09-04T23:51:07.485205750Z" level=info msg="TearDown network for sandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\" successfully"
Sep 4 23:51:07.485261 containerd[1493]: time="2025-09-04T23:51:07.485222100Z" level=info msg="StopPodSandbox for \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\" returns successfully"
Sep 4 23:51:07.485720 containerd[1493]: time="2025-09-04T23:51:07.485697817Z" level=info msg="RemovePodSandbox for \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\""
Sep 4 23:51:07.485796 containerd[1493]: time="2025-09-04T23:51:07.485727334Z" level=info msg="Forcibly stopping sandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\""
Sep 4 23:51:07.485896 containerd[1493]: time="2025-09-04T23:51:07.485834948Z" level=info msg="TearDown network for sandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\" successfully"
Sep 4 23:51:07.543056 containerd[1493]: time="2025-09-04T23:51:07.542871579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.543056 containerd[1493]: time="2025-09-04T23:51:07.543016626Z" level=info msg="RemovePodSandbox \"221a336af55be2c9646604fea5973e52aef6c0d002c49890c63709382cb83bfc\" returns successfully"
Sep 4 23:51:07.548899 containerd[1493]: time="2025-09-04T23:51:07.548796851Z" level=info msg="StopPodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\""
Sep 4 23:51:07.549169 containerd[1493]: time="2025-09-04T23:51:07.548999528Z" level=info msg="TearDown network for sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" successfully"
Sep 4 23:51:07.549169 containerd[1493]: time="2025-09-04T23:51:07.549059011Z" level=info msg="StopPodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" returns successfully"
Sep 4 23:51:07.549946 containerd[1493]: time="2025-09-04T23:51:07.549696897Z" level=info msg="RemovePodSandbox for \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\""
Sep 4 23:51:07.549946 containerd[1493]: time="2025-09-04T23:51:07.549738376Z" level=info msg="Forcibly stopping sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\""
Sep 4 23:51:07.549946 containerd[1493]: time="2025-09-04T23:51:07.549827215Z" level=info msg="TearDown network for sandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" successfully"
Sep 4 23:51:07.569833 containerd[1493]: time="2025-09-04T23:51:07.569728413Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.570092 containerd[1493]: time="2025-09-04T23:51:07.569880222Z" level=info msg="RemovePodSandbox \"39ef0349de02c351a8edc14251c8d1806b575b47977717180312bd4544edc56c\" returns successfully"
Sep 4 23:51:07.570727 containerd[1493]: time="2025-09-04T23:51:07.570682672Z" level=info msg="StopPodSandbox for \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\""
Sep 4 23:51:07.571024 containerd[1493]: time="2025-09-04T23:51:07.570842887Z" level=info msg="TearDown network for sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\" successfully"
Sep 4 23:51:07.571024 containerd[1493]: time="2025-09-04T23:51:07.570861693Z" level=info msg="StopPodSandbox for \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\" returns successfully"
Sep 4 23:51:07.571701 containerd[1493]: time="2025-09-04T23:51:07.571639826Z" level=info msg="RemovePodSandbox for \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\""
Sep 4 23:51:07.571701 containerd[1493]: time="2025-09-04T23:51:07.571700491Z" level=info msg="Forcibly stopping sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\""
Sep 4 23:51:07.571980 containerd[1493]: time="2025-09-04T23:51:07.571835659Z" level=info msg="TearDown network for sandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\" successfully"
Sep 4 23:51:07.597607 containerd[1493]: time="2025-09-04T23:51:07.597311461Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.597607 containerd[1493]: time="2025-09-04T23:51:07.597471395Z" level=info msg="RemovePodSandbox \"4cea1eac11abdbd6ed0820aab63a56f20318c266b2c88e713259fbe416cc43d8\" returns successfully"
Sep 4 23:51:07.598264 containerd[1493]: time="2025-09-04T23:51:07.598214061Z" level=info msg="StopPodSandbox for \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\""
Sep 4 23:51:07.598435 containerd[1493]: time="2025-09-04T23:51:07.598380479Z" level=info msg="TearDown network for sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\" successfully"
Sep 4 23:51:07.598435 containerd[1493]: time="2025-09-04T23:51:07.598405786Z" level=info msg="StopPodSandbox for \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\" returns successfully"
Sep 4 23:51:07.598908 containerd[1493]: time="2025-09-04T23:51:07.598858709Z" level=info msg="RemovePodSandbox for \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\""
Sep 4 23:51:07.599198 containerd[1493]: time="2025-09-04T23:51:07.598908024Z" level=info msg="Forcibly stopping sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\""
Sep 4 23:51:07.599198 containerd[1493]: time="2025-09-04T23:51:07.599007073Z" level=info msg="TearDown network for sandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\" successfully"
Sep 4 23:51:07.607059 containerd[1493]: time="2025-09-04T23:51:07.606855831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.607436 containerd[1493]: time="2025-09-04T23:51:07.607232348Z" level=info msg="RemovePodSandbox \"fda486c434a2ad8a7cc875ea0836e9490378befcb3dd92132d12f2f2e34904fc\" returns successfully"
Sep 4 23:51:07.608499 containerd[1493]: time="2025-09-04T23:51:07.608391768Z" level=info msg="StopPodSandbox for \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\""
Sep 4 23:51:07.608791 containerd[1493]: time="2025-09-04T23:51:07.608638177Z" level=info msg="TearDown network for sandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\" successfully"
Sep 4 23:51:07.608791 containerd[1493]: time="2025-09-04T23:51:07.608655651Z" level=info msg="StopPodSandbox for \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\" returns successfully"
Sep 4 23:51:07.609482 containerd[1493]: time="2025-09-04T23:51:07.609445145Z" level=info msg="RemovePodSandbox for \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\""
Sep 4 23:51:07.609551 containerd[1493]: time="2025-09-04T23:51:07.609496193Z" level=info msg="Forcibly stopping sandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\""
Sep 4 23:51:07.609912 containerd[1493]: time="2025-09-04T23:51:07.609644535Z" level=info msg="TearDown network for sandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\" successfully"
Sep 4 23:51:07.616410 containerd[1493]: time="2025-09-04T23:51:07.616302444Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.616410 containerd[1493]: time="2025-09-04T23:51:07.616399529Z" level=info msg="RemovePodSandbox \"c3519b41cb213537cd96c2cdfd112aa2c9f1112504a83a8b42f3378da4e2d339\" returns successfully"
Sep 4 23:51:07.617376 containerd[1493]: time="2025-09-04T23:51:07.617067261Z" level=info msg="StopPodSandbox for \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\""
Sep 4 23:51:07.617376 containerd[1493]: time="2025-09-04T23:51:07.617250411Z" level=info msg="TearDown network for sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\" successfully"
Sep 4 23:51:07.617376 containerd[1493]: time="2025-09-04T23:51:07.617275739Z" level=info msg="StopPodSandbox for \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\" returns successfully"
Sep 4 23:51:07.617895 containerd[1493]: time="2025-09-04T23:51:07.617815487Z" level=info msg="RemovePodSandbox for \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\""
Sep 4 23:51:07.617895 containerd[1493]: time="2025-09-04T23:51:07.617872436Z" level=info msg="Forcibly stopping sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\""
Sep 4 23:51:07.618069 containerd[1493]: time="2025-09-04T23:51:07.618012342Z" level=info msg="TearDown network for sandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\" successfully"
Sep 4 23:51:07.629699 containerd[1493]: time="2025-09-04T23:51:07.629533169Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.629915 containerd[1493]: time="2025-09-04T23:51:07.629799977Z" level=info msg="RemovePodSandbox \"caac593378fd727a23804927945f1513a640f7b3269a7d842d5287140fa342b8\" returns successfully"
Sep 4 23:51:07.632326 containerd[1493]: time="2025-09-04T23:51:07.632283331Z" level=info msg="StopPodSandbox for \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\""
Sep 4 23:51:07.632693 containerd[1493]: time="2025-09-04T23:51:07.632475597Z" level=info msg="TearDown network for sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\" successfully"
Sep 4 23:51:07.632693 containerd[1493]: time="2025-09-04T23:51:07.632501286Z" level=info msg="StopPodSandbox for \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\" returns successfully"
Sep 4 23:51:07.633497 containerd[1493]: time="2025-09-04T23:51:07.633464251Z" level=info msg="RemovePodSandbox for \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\""
Sep 4 23:51:07.633670 containerd[1493]: time="2025-09-04T23:51:07.633503265Z" level=info msg="Forcibly stopping sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\""
Sep 4 23:51:07.633670 containerd[1493]: time="2025-09-04T23:51:07.633625027Z" level=info msg="TearDown network for sandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\" successfully"
Sep 4 23:51:07.644688 containerd[1493]: time="2025-09-04T23:51:07.643943132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.644688 containerd[1493]: time="2025-09-04T23:51:07.644171567Z" level=info msg="RemovePodSandbox \"ab929939d730a0b46e6b81237dbd1cbf6b24b966e6a577548d8e3bb57c6cac83\" returns successfully"
Sep 4 23:51:07.645028 containerd[1493]: time="2025-09-04T23:51:07.644775347Z" level=info msg="StopPodSandbox for \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\""
Sep 4 23:51:07.645028 containerd[1493]: time="2025-09-04T23:51:07.644929932Z" level=info msg="TearDown network for sandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\" successfully"
Sep 4 23:51:07.645028 containerd[1493]: time="2025-09-04T23:51:07.644946595Z" level=info msg="StopPodSandbox for \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\" returns successfully"
Sep 4 23:51:07.645421 containerd[1493]: time="2025-09-04T23:51:07.645365102Z" level=info msg="RemovePodSandbox for \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\""
Sep 4 23:51:07.645421 containerd[1493]: time="2025-09-04T23:51:07.645401602Z" level=info msg="Forcibly stopping sandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\""
Sep 4 23:51:07.645681 containerd[1493]: time="2025-09-04T23:51:07.645501973Z" level=info msg="TearDown network for sandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\" successfully"
Sep 4 23:51:07.654081 containerd[1493]: time="2025-09-04T23:51:07.653971544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:51:07.654081 containerd[1493]: time="2025-09-04T23:51:07.654080712Z" level=info msg="RemovePodSandbox \"eec079f8379a15ab7a87fe4181f231efd532b0b64310b84f4ee6dd830ac96453\" returns successfully" Sep 4 23:51:07.654886 containerd[1493]: time="2025-09-04T23:51:07.654825752Z" level=info msg="StopPodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\"" Sep 4 23:51:07.655110 containerd[1493]: time="2025-09-04T23:51:07.655003791Z" level=info msg="TearDown network for sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" successfully" Sep 4 23:51:07.655110 containerd[1493]: time="2025-09-04T23:51:07.655025894Z" level=info msg="StopPodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" returns successfully" Sep 4 23:51:07.655654 containerd[1493]: time="2025-09-04T23:51:07.655618523Z" level=info msg="RemovePodSandbox for \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\"" Sep 4 23:51:07.655726 containerd[1493]: time="2025-09-04T23:51:07.655656395Z" level=info msg="Forcibly stopping sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\"" Sep 4 23:51:07.655857 containerd[1493]: time="2025-09-04T23:51:07.655799107Z" level=info msg="TearDown network for sandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" successfully" Sep 4 23:51:07.664994 containerd[1493]: time="2025-09-04T23:51:07.664885714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:51:07.664994 containerd[1493]: time="2025-09-04T23:51:07.664995523Z" level=info msg="RemovePodSandbox \"fbe166c68135220dce60aee3ba42a8db81f7284dc62f9d6b31b29311e0b3e94b\" returns successfully" Sep 4 23:51:07.670055 containerd[1493]: time="2025-09-04T23:51:07.669956308Z" level=info msg="StopPodSandbox for \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\"" Sep 4 23:51:07.670279 containerd[1493]: time="2025-09-04T23:51:07.670207947Z" level=info msg="TearDown network for sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\" successfully" Sep 4 23:51:07.670279 containerd[1493]: time="2025-09-04T23:51:07.670227124Z" level=info msg="StopPodSandbox for \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\" returns successfully" Sep 4 23:51:07.675160 containerd[1493]: time="2025-09-04T23:51:07.672913764Z" level=info msg="RemovePodSandbox for \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\"" Sep 4 23:51:07.675160 containerd[1493]: time="2025-09-04T23:51:07.672977085Z" level=info msg="Forcibly stopping sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\"" Sep 4 23:51:07.675160 containerd[1493]: time="2025-09-04T23:51:07.673114767Z" level=info msg="TearDown network for sandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\" successfully" Sep 4 23:51:07.685735 containerd[1493]: time="2025-09-04T23:51:07.685641742Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:51:07.685962 containerd[1493]: time="2025-09-04T23:51:07.685855088Z" level=info msg="RemovePodSandbox \"d1cd846a62b75039ca5ab2dd537d3692384135003c3c88187fe08d917942b81e\" returns successfully" Sep 4 23:51:07.687157 containerd[1493]: time="2025-09-04T23:51:07.687025819Z" level=info msg="StopPodSandbox for \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\"" Sep 4 23:51:07.687381 containerd[1493]: time="2025-09-04T23:51:07.687293670Z" level=info msg="TearDown network for sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\" successfully" Sep 4 23:51:07.687381 containerd[1493]: time="2025-09-04T23:51:07.687361028Z" level=info msg="StopPodSandbox for \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\" returns successfully" Sep 4 23:51:07.687719 containerd[1493]: time="2025-09-04T23:51:07.687689664Z" level=info msg="RemovePodSandbox for \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\"" Sep 4 23:51:07.687719 containerd[1493]: time="2025-09-04T23:51:07.687717347Z" level=info msg="Forcibly stopping sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\"" Sep 4 23:51:07.687837 containerd[1493]: time="2025-09-04T23:51:07.687796548Z" level=info msg="TearDown network for sandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\" successfully" Sep 4 23:51:07.696859 containerd[1493]: time="2025-09-04T23:51:07.696754049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:51:07.697140 containerd[1493]: time="2025-09-04T23:51:07.696889107Z" level=info msg="RemovePodSandbox \"f6470cd8a98d2ec4d6f383a6450a22de8c94c6336160825551a22c3b1661d814\" returns successfully" Sep 4 23:51:07.697816 containerd[1493]: time="2025-09-04T23:51:07.697681767Z" level=info msg="StopPodSandbox for \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\"" Sep 4 23:51:07.698180 containerd[1493]: time="2025-09-04T23:51:07.697889584Z" level=info msg="TearDown network for sandbox \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\" successfully" Sep 4 23:51:07.698180 containerd[1493]: time="2025-09-04T23:51:07.697909210Z" level=info msg="StopPodSandbox for \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\" returns successfully" Sep 4 23:51:07.698705 containerd[1493]: time="2025-09-04T23:51:07.698454139Z" level=info msg="RemovePodSandbox for \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\"" Sep 4 23:51:07.698705 containerd[1493]: time="2025-09-04T23:51:07.698686081Z" level=info msg="Forcibly stopping sandbox \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\"" Sep 4 23:51:07.698952 containerd[1493]: time="2025-09-04T23:51:07.698781904Z" level=info msg="TearDown network for sandbox \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\" successfully" Sep 4 23:51:07.713070 containerd[1493]: time="2025-09-04T23:51:07.712984180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:51:07.713425 containerd[1493]: time="2025-09-04T23:51:07.713366549Z" level=info msg="RemovePodSandbox \"88e832c82497569883a33eb8c2fa7ca6e301915436548a04ba2f8ccf06500f70\" returns successfully" Sep 4 23:51:07.715175 containerd[1493]: time="2025-09-04T23:51:07.715058143Z" level=info msg="StopPodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\"" Sep 4 23:51:07.716191 containerd[1493]: time="2025-09-04T23:51:07.715256812Z" level=info msg="TearDown network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" successfully" Sep 4 23:51:07.716191 containerd[1493]: time="2025-09-04T23:51:07.715275637Z" level=info msg="StopPodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" returns successfully" Sep 4 23:51:07.716409 containerd[1493]: time="2025-09-04T23:51:07.716367158Z" level=info msg="RemovePodSandbox for \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\"" Sep 4 23:51:07.716409 containerd[1493]: time="2025-09-04T23:51:07.716402555Z" level=info msg="Forcibly stopping sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\"" Sep 4 23:51:07.716573 containerd[1493]: time="2025-09-04T23:51:07.716501985Z" level=info msg="TearDown network for sandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" successfully" Sep 4 23:51:07.749721 containerd[1493]: time="2025-09-04T23:51:07.746909475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:51:07.749721 containerd[1493]: time="2025-09-04T23:51:07.747057968Z" level=info msg="RemovePodSandbox \"3db9cbba8accbb33f823ef0c73e109d80d114f585a1489ca55a6725931d9b82c\" returns successfully" Sep 4 23:51:07.749721 containerd[1493]: time="2025-09-04T23:51:07.747911184Z" level=info msg="StopPodSandbox for \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\"" Sep 4 23:51:07.749721 containerd[1493]: time="2025-09-04T23:51:07.748074125Z" level=info msg="TearDown network for sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\" successfully" Sep 4 23:51:07.749721 containerd[1493]: time="2025-09-04T23:51:07.748088553Z" level=info msg="StopPodSandbox for \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\" returns successfully" Sep 4 23:51:07.749721 containerd[1493]: time="2025-09-04T23:51:07.748575471Z" level=info msg="RemovePodSandbox for \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\"" Sep 4 23:51:07.749721 containerd[1493]: time="2025-09-04T23:51:07.748600348Z" level=info msg="Forcibly stopping sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\"" Sep 4 23:51:07.749721 containerd[1493]: time="2025-09-04T23:51:07.748686933Z" level=info msg="TearDown network for sandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\" successfully" Sep 4 23:51:07.758171 containerd[1493]: time="2025-09-04T23:51:07.758057271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:51:07.758395 containerd[1493]: time="2025-09-04T23:51:07.758187630Z" level=info msg="RemovePodSandbox \"8c65d5c44ef18a2cce9051c8191166146fede7711b5e88febf2ecee9ab673144\" returns successfully" Sep 4 23:51:07.759410 containerd[1493]: time="2025-09-04T23:51:07.759307994Z" level=info msg="StopPodSandbox for \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\"" Sep 4 23:51:07.759524 containerd[1493]: time="2025-09-04T23:51:07.759453462Z" level=info msg="TearDown network for sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\" successfully" Sep 4 23:51:07.759524 containerd[1493]: time="2025-09-04T23:51:07.759519017Z" level=info msg="StopPodSandbox for \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\" returns successfully" Sep 4 23:51:07.759903 containerd[1493]: time="2025-09-04T23:51:07.759859105Z" level=info msg="RemovePodSandbox for \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\"" Sep 4 23:51:07.759903 containerd[1493]: time="2025-09-04T23:51:07.759889713Z" level=info msg="Forcibly stopping sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\"" Sep 4 23:51:07.760220 containerd[1493]: time="2025-09-04T23:51:07.759981989Z" level=info msg="TearDown network for sandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\" successfully" Sep 4 23:51:07.767048 containerd[1493]: time="2025-09-04T23:51:07.766951842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:51:07.767048 containerd[1493]: time="2025-09-04T23:51:07.767065990Z" level=info msg="RemovePodSandbox \"29cbd60ddf1f97e47043f2ea4d917fb4d93ef26ef870314537b001defeb6ccb7\" returns successfully" Sep 4 23:51:07.767910 containerd[1493]: time="2025-09-04T23:51:07.767653480Z" level=info msg="StopPodSandbox for \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\"" Sep 4 23:51:07.767910 containerd[1493]: time="2025-09-04T23:51:07.767816040Z" level=info msg="TearDown network for sandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\" successfully" Sep 4 23:51:07.767910 containerd[1493]: time="2025-09-04T23:51:07.767833552Z" level=info msg="StopPodSandbox for \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\" returns successfully" Sep 4 23:51:07.768246 containerd[1493]: time="2025-09-04T23:51:07.768217745Z" level=info msg="RemovePodSandbox for \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\"" Sep 4 23:51:07.768382 containerd[1493]: time="2025-09-04T23:51:07.768248684Z" level=info msg="Forcibly stopping sandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\"" Sep 4 23:51:07.768382 containerd[1493]: time="2025-09-04T23:51:07.768340749Z" level=info msg="TearDown network for sandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\" successfully" Sep 4 23:51:07.775544 containerd[1493]: time="2025-09-04T23:51:07.775468143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:51:07.775721 containerd[1493]: time="2025-09-04T23:51:07.775571460Z" level=info msg="RemovePodSandbox \"5ca88b11a193e5a719182285f587d742d5955a553f51361293437539de326f44\" returns successfully" Sep 4 23:51:08.225734 kubelet[2748]: I0904 23:51:08.225646 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78c45fc5f4-726pc" podStartSLOduration=75.584472363 podStartE2EDuration="1m31.225623631s" podCreationTimestamp="2025-09-04 23:49:37 +0000 UTC" firstStartedPulling="2025-09-04 23:50:50.785337156 +0000 UTC m=+105.730421060" lastFinishedPulling="2025-09-04 23:51:06.426488424 +0000 UTC m=+121.371572328" observedRunningTime="2025-09-04 23:51:08.224921093 +0000 UTC m=+123.170005017" watchObservedRunningTime="2025-09-04 23:51:08.225623631 +0000 UTC m=+123.170707535" Sep 4 23:51:09.111789 containerd[1493]: time="2025-09-04T23:51:09.111682864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:09.238061 containerd[1493]: time="2025-09-04T23:51:09.237958925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 4 23:51:09.343693 containerd[1493]: time="2025-09-04T23:51:09.343613799Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:09.420600 containerd[1493]: time="2025-09-04T23:51:09.420497898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:09.421443 containerd[1493]: time="2025-09-04T23:51:09.421399917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id 
\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.994716241s" Sep 4 23:51:09.421491 containerd[1493]: time="2025-09-04T23:51:09.421450864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 4 23:51:09.422879 containerd[1493]: time="2025-09-04T23:51:09.422827377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 4 23:51:09.547475 containerd[1493]: time="2025-09-04T23:51:09.547399922Z" level=info msg="CreateContainer within sandbox \"c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 23:51:09.621103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790646109.mount: Deactivated successfully. Sep 4 23:51:09.634684 containerd[1493]: time="2025-09-04T23:51:09.634224330Z" level=info msg="CreateContainer within sandbox \"c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"93b13d493340cc71af041ede00672228c3ad1a229f7794f4624ef20101c202b5\"" Sep 4 23:51:09.636507 containerd[1493]: time="2025-09-04T23:51:09.635272177Z" level=info msg="StartContainer for \"93b13d493340cc71af041ede00672228c3ad1a229f7794f4624ef20101c202b5\"" Sep 4 23:51:09.691729 systemd[1]: Started cri-containerd-93b13d493340cc71af041ede00672228c3ad1a229f7794f4624ef20101c202b5.scope - libcontainer container 93b13d493340cc71af041ede00672228c3ad1a229f7794f4624ef20101c202b5. 
Sep 4 23:51:09.758367 containerd[1493]: time="2025-09-04T23:51:09.758294308Z" level=info msg="StartContainer for \"93b13d493340cc71af041ede00672228c3ad1a229f7794f4624ef20101c202b5\" returns successfully" Sep 4 23:51:11.130369 systemd[1]: Started sshd@17-10.0.0.62:22-10.0.0.1:57126.service - OpenSSH per-connection server daemon (10.0.0.1:57126). Sep 4 23:51:11.204595 sshd[5954]: Accepted publickey for core from 10.0.0.1 port 57126 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:11.206949 sshd-session[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:11.212673 systemd-logind[1475]: New session 18 of user core. Sep 4 23:51:11.219259 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 23:51:11.891520 sshd[5956]: Connection closed by 10.0.0.1 port 57126 Sep 4 23:51:11.891916 sshd-session[5954]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:11.896367 systemd[1]: sshd@17-10.0.0.62:22-10.0.0.1:57126.service: Deactivated successfully. Sep 4 23:51:11.899822 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:51:11.901879 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit. Sep 4 23:51:11.903350 systemd-logind[1475]: Removed session 18. Sep 4 23:51:16.904887 systemd[1]: Started sshd@18-10.0.0.62:22-10.0.0.1:57130.service - OpenSSH per-connection server daemon (10.0.0.1:57130). Sep 4 23:51:16.983165 sshd[6030]: Accepted publickey for core from 10.0.0.1 port 57130 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:16.984673 sshd-session[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:16.991144 systemd-logind[1475]: New session 19 of user core. Sep 4 23:51:16.997280 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 4 23:51:18.679077 containerd[1493]: time="2025-09-04T23:51:18.678983622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:18.684633 containerd[1493]: time="2025-09-04T23:51:18.684056644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 4 23:51:18.688364 containerd[1493]: time="2025-09-04T23:51:18.688307302Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:18.695692 containerd[1493]: time="2025-09-04T23:51:18.695580932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:18.697995 containerd[1493]: time="2025-09-04T23:51:18.697665708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 9.274772116s" Sep 4 23:51:18.697995 containerd[1493]: time="2025-09-04T23:51:18.697990616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 4 23:51:18.699964 containerd[1493]: time="2025-09-04T23:51:18.699925498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 23:51:18.729667 containerd[1493]: time="2025-09-04T23:51:18.729567538Z" level=info msg="CreateContainer within sandbox 
\"421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 23:51:18.779640 containerd[1493]: time="2025-09-04T23:51:18.779049777Z" level=info msg="CreateContainer within sandbox \"421827f253155120a3cfb91c79e1d6ad45dc18c4dccf483a658e9059489820bb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1dd40b3a04daa0824b59554cadc2741248d2f1c5649b65355b6680a11f3e8087\"" Sep 4 23:51:18.781330 containerd[1493]: time="2025-09-04T23:51:18.781202803Z" level=info msg="StartContainer for \"1dd40b3a04daa0824b59554cadc2741248d2f1c5649b65355b6680a11f3e8087\"" Sep 4 23:51:18.878414 systemd[1]: Started cri-containerd-1dd40b3a04daa0824b59554cadc2741248d2f1c5649b65355b6680a11f3e8087.scope - libcontainer container 1dd40b3a04daa0824b59554cadc2741248d2f1c5649b65355b6680a11f3e8087. Sep 4 23:51:20.441571 containerd[1493]: time="2025-09-04T23:51:20.441521103Z" level=info msg="StartContainer for \"1dd40b3a04daa0824b59554cadc2741248d2f1c5649b65355b6680a11f3e8087\" returns successfully" Sep 4 23:51:20.444220 kubelet[2748]: E0904 23:51:20.443221 2748 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.293s" Sep 4 23:51:20.449386 sshd[6032]: Connection closed by 10.0.0.1 port 57130 Sep 4 23:51:20.451165 sshd-session[6030]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:20.455853 systemd[1]: sshd@18-10.0.0.62:22-10.0.0.1:57130.service: Deactivated successfully. Sep 4 23:51:20.460880 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:51:20.464376 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit. Sep 4 23:51:20.466105 systemd-logind[1475]: Removed session 19. 
Sep 4 23:51:20.505537 containerd[1493]: time="2025-09-04T23:51:20.505453584Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:20.514058 containerd[1493]: time="2025-09-04T23:51:20.512213803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 4 23:51:20.516254 containerd[1493]: time="2025-09-04T23:51:20.516195518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 1.816222639s" Sep 4 23:51:20.516254 containerd[1493]: time="2025-09-04T23:51:20.516254781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 23:51:20.520241 containerd[1493]: time="2025-09-04T23:51:20.519987611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 4 23:51:20.533369 containerd[1493]: time="2025-09-04T23:51:20.533292660Z" level=info msg="CreateContainer within sandbox \"af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 23:51:20.562269 containerd[1493]: time="2025-09-04T23:51:20.562200247Z" level=info msg="CreateContainer within sandbox \"af328e7934093992eb20537ec115a676039f118d91b477ba930ec049c18c6a94\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bbb9c83bf303abf4d3aa6f5dfe967d47604f601dd402c97962ba5c13e975f7ed\"" Sep 4 23:51:20.563106 containerd[1493]: time="2025-09-04T23:51:20.563012903Z" level=info msg="StartContainer for 
\"bbb9c83bf303abf4d3aa6f5dfe967d47604f601dd402c97962ba5c13e975f7ed\"" Sep 4 23:51:20.614498 systemd[1]: Started cri-containerd-bbb9c83bf303abf4d3aa6f5dfe967d47604f601dd402c97962ba5c13e975f7ed.scope - libcontainer container bbb9c83bf303abf4d3aa6f5dfe967d47604f601dd402c97962ba5c13e975f7ed. Sep 4 23:51:20.691309 containerd[1493]: time="2025-09-04T23:51:20.691249723Z" level=info msg="StartContainer for \"bbb9c83bf303abf4d3aa6f5dfe967d47604f601dd402c97962ba5c13e975f7ed\" returns successfully" Sep 4 23:51:21.492196 kubelet[2748]: I0904 23:51:21.491647 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78c45fc5f4-77ks8" podStartSLOduration=77.679211005 podStartE2EDuration="1m44.491598144s" podCreationTimestamp="2025-09-04 23:49:37 +0000 UTC" firstStartedPulling="2025-09-04 23:50:53.705305988 +0000 UTC m=+108.650389892" lastFinishedPulling="2025-09-04 23:51:20.517693127 +0000 UTC m=+135.462777031" observedRunningTime="2025-09-04 23:51:21.491206709 +0000 UTC m=+136.436290643" watchObservedRunningTime="2025-09-04 23:51:21.491598144 +0000 UTC m=+136.436682048" Sep 4 23:51:21.664586 kubelet[2748]: I0904 23:51:21.664415 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9fcb7bdb4-fjbtd" podStartSLOduration=74.162607947 podStartE2EDuration="1m40.664389382s" podCreationTimestamp="2025-09-04 23:49:41 +0000 UTC" firstStartedPulling="2025-09-04 23:50:52.197398414 +0000 UTC m=+107.142482318" lastFinishedPulling="2025-09-04 23:51:18.699179838 +0000 UTC m=+133.644263753" observedRunningTime="2025-09-04 23:51:21.663897506 +0000 UTC m=+136.608981410" watchObservedRunningTime="2025-09-04 23:51:21.664389382 +0000 UTC m=+136.609473306" Sep 4 23:51:23.458306 kubelet[2748]: I0904 23:51:23.457291 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 23:51:24.119430 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2212230692.mount: Deactivated successfully. Sep 4 23:51:25.167444 containerd[1493]: time="2025-09-04T23:51:25.167343120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:25.188763 containerd[1493]: time="2025-09-04T23:51:25.188689970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 4 23:51:25.209780 containerd[1493]: time="2025-09-04T23:51:25.209423283Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:25.229059 containerd[1493]: time="2025-09-04T23:51:25.228964381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:25.231688 containerd[1493]: time="2025-09-04T23:51:25.229936548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.709884976s" Sep 4 23:51:25.231688 containerd[1493]: time="2025-09-04T23:51:25.229969541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 4 23:51:25.232407 containerd[1493]: time="2025-09-04T23:51:25.232088760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 4 23:51:25.283311 containerd[1493]: 
time="2025-09-04T23:51:25.282668982Z" level=info msg="CreateContainer within sandbox \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 4 23:51:25.519938 systemd[1]: Started sshd@19-10.0.0.62:22-10.0.0.1:56720.service - OpenSSH per-connection server daemon (10.0.0.1:56720). Sep 4 23:51:25.824734 containerd[1493]: time="2025-09-04T23:51:25.824195342Z" level=info msg="CreateContainer within sandbox \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d\"" Sep 4 23:51:25.828937 containerd[1493]: time="2025-09-04T23:51:25.827296428Z" level=info msg="StartContainer for \"4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d\"" Sep 4 23:51:25.885090 sshd[6191]: Accepted publickey for core from 10.0.0.1 port 56720 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:25.899430 sshd-session[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:25.911489 systemd-logind[1475]: New session 20 of user core. Sep 4 23:51:25.937950 systemd[1]: Started cri-containerd-4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d.scope - libcontainer container 4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d. Sep 4 23:51:25.939834 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 4 23:51:26.035913 containerd[1493]: time="2025-09-04T23:51:26.035836651Z" level=info msg="StartContainer for \"4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d\" returns successfully" Sep 4 23:51:26.697411 containerd[1493]: time="2025-09-04T23:51:26.697091291Z" level=info msg="StopContainer for \"4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d\" with timeout 30 (s)" Sep 4 23:51:26.698107 containerd[1493]: time="2025-09-04T23:51:26.697523813Z" level=info msg="StopContainer for \"4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb\" with timeout 30 (s)" Sep 4 23:51:26.702587 containerd[1493]: time="2025-09-04T23:51:26.702538935Z" level=info msg="Stop container \"4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb\" with signal terminated" Sep 4 23:51:26.707329 containerd[1493]: time="2025-09-04T23:51:26.703036250Z" level=info msg="Stop container \"4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d\" with signal terminated" Sep 4 23:51:26.795516 systemd[1]: cri-containerd-4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d.scope: Deactivated successfully. Sep 4 23:51:26.821246 systemd[1]: cri-containerd-4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb.scope: Deactivated successfully. Sep 4 23:51:26.987505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb-rootfs.mount: Deactivated successfully. Sep 4 23:51:27.006273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d-rootfs.mount: Deactivated successfully. 
Sep 4 23:51:27.026443 containerd[1493]: time="2025-09-04T23:51:26.994803167Z" level=info msg="shim disconnected" id=4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb namespace=k8s.io Sep 4 23:51:27.282412 containerd[1493]: time="2025-09-04T23:51:27.280934247Z" level=warning msg="cleaning up after shim disconnected" id=4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb namespace=k8s.io Sep 4 23:51:27.282412 containerd[1493]: time="2025-09-04T23:51:27.280991196Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:51:27.306812 containerd[1493]: time="2025-09-04T23:51:27.303287300Z" level=info msg="shim disconnected" id=4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d namespace=k8s.io Sep 4 23:51:27.306812 containerd[1493]: time="2025-09-04T23:51:27.303372281Z" level=warning msg="cleaning up after shim disconnected" id=4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d namespace=k8s.io Sep 4 23:51:27.306812 containerd[1493]: time="2025-09-04T23:51:27.303383673Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:51:27.361551 containerd[1493]: time="2025-09-04T23:51:27.359755289Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:51:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:51:27.661804 containerd[1493]: time="2025-09-04T23:51:27.610730263Z" level=info msg="StopContainer for \"4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d\" returns successfully" Sep 4 23:51:27.657511 systemd[1]: sshd@19-10.0.0.62:22-10.0.0.1:56720.service: Deactivated successfully. Sep 4 23:51:27.631191 sshd-session[6191]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:27.662594 sshd[6212]: Connection closed by 10.0.0.1 port 56720 Sep 4 23:51:27.668428 systemd[1]: session-20.scope: Deactivated successfully. 
Sep 4 23:51:27.674807 systemd-logind[1475]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:51:27.681993 systemd-logind[1475]: Removed session 20. Sep 4 23:51:28.349034 containerd[1493]: time="2025-09-04T23:51:28.348963586Z" level=info msg="StopContainer for \"4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb\" returns successfully" Sep 4 23:51:28.349697 containerd[1493]: time="2025-09-04T23:51:28.349597811Z" level=info msg="StopPodSandbox for \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\"" Sep 4 23:51:28.358112 containerd[1493]: time="2025-09-04T23:51:28.355468848Z" level=info msg="Container to stop \"4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:51:28.358112 containerd[1493]: time="2025-09-04T23:51:28.358104136Z" level=info msg="Container to stop \"4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:51:28.362164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01-shm.mount: Deactivated successfully. Sep 4 23:51:28.367425 systemd[1]: cri-containerd-2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01.scope: Deactivated successfully. 
Sep 4 23:51:28.393176 containerd[1493]: time="2025-09-04T23:51:28.390946967Z" level=info msg="shim disconnected" id=2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01 namespace=k8s.io Sep 4 23:51:28.393176 containerd[1493]: time="2025-09-04T23:51:28.391022069Z" level=warning msg="cleaning up after shim disconnected" id=2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01 namespace=k8s.io Sep 4 23:51:28.393176 containerd[1493]: time="2025-09-04T23:51:28.391032269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:51:28.394144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01-rootfs.mount: Deactivated successfully. Sep 4 23:51:29.647263 kubelet[2748]: I0904 23:51:29.647222 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Sep 4 23:51:30.484031 kubelet[2748]: I0904 23:51:30.483944 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-754df77889-htb2w" podStartSLOduration=70.391080381 podStartE2EDuration="1m46.483920429s" podCreationTimestamp="2025-09-04 23:49:44 +0000 UTC" firstStartedPulling="2025-09-04 23:50:49.138482806 +0000 UTC m=+104.083566710" lastFinishedPulling="2025-09-04 23:51:25.231322854 +0000 UTC m=+140.176406758" observedRunningTime="2025-09-04 23:51:26.551984649 +0000 UTC m=+141.497068573" watchObservedRunningTime="2025-09-04 23:51:30.483920429 +0000 UTC m=+145.429004343" Sep 4 23:51:31.398398 systemd-networkd[1427]: cali7ad0bf8edad: Link DOWN Sep 4 23:51:31.398408 systemd-networkd[1427]: cali7ad0bf8edad: Lost carrier Sep 4 23:51:32.642736 systemd[1]: Started sshd@20-10.0.0.62:22-10.0.0.1:57188.service - OpenSSH per-connection server daemon (10.0.0.1:57188). 
Sep 4 23:51:32.704685 sshd[6404]: Accepted publickey for core from 10.0.0.1 port 57188 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:32.706527 sshd-session[6404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:32.711298 systemd-logind[1475]: New session 21 of user core. Sep 4 23:51:32.723537 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 23:51:33.911635 sshd[6406]: Connection closed by 10.0.0.1 port 57188 Sep 4 23:51:33.912041 sshd-session[6404]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:33.917912 systemd[1]: sshd@20-10.0.0.62:22-10.0.0.1:57188.service: Deactivated successfully. Sep 4 23:51:33.920225 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:51:33.920977 systemd-logind[1475]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:51:33.921883 systemd-logind[1475]: Removed session 21. Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:31.388 [INFO][6348] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:31.394 [INFO][6348] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" iface="eth0" netns="/var/run/netns/cni-a35004e9-4712-c961-13c5-ce3c49d0c9d0" Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:31.394 [INFO][6348] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" iface="eth0" netns="/var/run/netns/cni-a35004e9-4712-c961-13c5-ce3c49d0c9d0" Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:31.407 [INFO][6348] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" after=12.9761ms iface="eth0" netns="/var/run/netns/cni-a35004e9-4712-c961-13c5-ce3c49d0c9d0" Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:31.407 [INFO][6348] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:31.407 [INFO][6348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:31.502 [INFO][6390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0" Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:31.502 [INFO][6390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:31.502 [INFO][6390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:34.282 [INFO][6390] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0" Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:34.282 [INFO][6390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0" Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:34.284 [INFO][6390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 23:51:34.294321 containerd[1493]: 2025-09-04 23:51:34.289 [INFO][6348] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Sep 4 23:51:34.308687 containerd[1493]: time="2025-09-04T23:51:34.295312870Z" level=info msg="TearDown network for sandbox \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\" successfully" Sep 4 23:51:34.308687 containerd[1493]: time="2025-09-04T23:51:34.295375510Z" level=info msg="StopPodSandbox for \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\" returns successfully" Sep 4 23:51:34.298932 systemd[1]: run-netns-cni\x2da35004e9\x2d4712\x2dc961\x2d13c5\x2dce3c49d0c9d0.mount: Deactivated successfully. 
Sep 4 23:51:34.396369 kubelet[2748]: I0904 23:51:34.396288 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5s2k\" (UniqueName: \"kubernetes.io/projected/c4b9b516-8bab-431f-a718-d4b546f75053-kube-api-access-b5s2k\") pod \"c4b9b516-8bab-431f-a718-d4b546f75053\" (UID: \"c4b9b516-8bab-431f-a718-d4b546f75053\") " Sep 4 23:51:34.396369 kubelet[2748]: I0904 23:51:34.396386 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4b9b516-8bab-431f-a718-d4b546f75053-whisker-backend-key-pair\") pod \"c4b9b516-8bab-431f-a718-d4b546f75053\" (UID: \"c4b9b516-8bab-431f-a718-d4b546f75053\") " Sep 4 23:51:34.397106 kubelet[2748]: I0904 23:51:34.396424 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b9b516-8bab-431f-a718-d4b546f75053-whisker-ca-bundle\") pod \"c4b9b516-8bab-431f-a718-d4b546f75053\" (UID: \"c4b9b516-8bab-431f-a718-d4b546f75053\") " Sep 4 23:51:34.412444 systemd[1]: var-lib-kubelet-pods-c4b9b516\x2d8bab\x2d431f\x2da718\x2dd4b546f75053-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 4 23:51:34.416140 systemd[1]: var-lib-kubelet-pods-c4b9b516\x2d8bab\x2d431f\x2da718\x2dd4b546f75053-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db5s2k.mount: Deactivated successfully. Sep 4 23:51:34.422583 kubelet[2748]: I0904 23:51:34.421436 2748 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b9b516-8bab-431f-a718-d4b546f75053-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c4b9b516-8bab-431f-a718-d4b546f75053" (UID: "c4b9b516-8bab-431f-a718-d4b546f75053"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:51:34.422759 kubelet[2748]: I0904 23:51:34.420591 2748 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b9b516-8bab-431f-a718-d4b546f75053-kube-api-access-b5s2k" (OuterVolumeSpecName: "kube-api-access-b5s2k") pod "c4b9b516-8bab-431f-a718-d4b546f75053" (UID: "c4b9b516-8bab-431f-a718-d4b546f75053"). InnerVolumeSpecName "kube-api-access-b5s2k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:51:34.422759 kubelet[2748]: I0904 23:51:34.421807 2748 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b9b516-8bab-431f-a718-d4b546f75053-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c4b9b516-8bab-431f-a718-d4b546f75053" (UID: "c4b9b516-8bab-431f-a718-d4b546f75053"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:51:34.500721 kubelet[2748]: I0904 23:51:34.500656 2748 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4b9b516-8bab-431f-a718-d4b546f75053-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 4 23:51:34.500721 kubelet[2748]: I0904 23:51:34.500724 2748 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b9b516-8bab-431f-a718-d4b546f75053-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 4 23:51:34.500721 kubelet[2748]: I0904 23:51:34.500736 2748 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5s2k\" (UniqueName: \"kubernetes.io/projected/c4b9b516-8bab-431f-a718-d4b546f75053-kube-api-access-b5s2k\") on node \"localhost\" DevicePath \"\"" Sep 4 23:51:34.697078 systemd[1]: Removed slice kubepods-besteffort-podc4b9b516_8bab_431f_a718_d4b546f75053.slice - libcontainer container 
kubepods-besteffort-podc4b9b516_8bab_431f_a718_d4b546f75053.slice. Sep 4 23:51:35.335581 containerd[1493]: time="2025-09-04T23:51:35.335475842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:35.419158 containerd[1493]: time="2025-09-04T23:51:35.419053953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 4 23:51:35.478171 containerd[1493]: time="2025-09-04T23:51:35.477968101Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:35.800909 containerd[1493]: time="2025-09-04T23:51:35.788177448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:35.800909 containerd[1493]: time="2025-09-04T23:51:35.789269201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 10.557144472s" Sep 4 23:51:35.800909 containerd[1493]: time="2025-09-04T23:51:35.789330707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 4 23:51:35.907734 containerd[1493]: time="2025-09-04T23:51:35.907674493Z" level=info msg="CreateContainer within sandbox 
\"c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 23:51:36.526106 containerd[1493]: time="2025-09-04T23:51:36.526010709Z" level=info msg="CreateContainer within sandbox \"c1946a2b3943dd6d5e12bafdf5dcbd2220db5c09d08bb6445e7f02e25685bb61\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5988b69bb7bea5278107051895917ce2fe1f2f83ff1a79cc24c241bfcb03980b\"" Sep 4 23:51:36.526782 containerd[1493]: time="2025-09-04T23:51:36.526737149Z" level=info msg="StartContainer for \"5988b69bb7bea5278107051895917ce2fe1f2f83ff1a79cc24c241bfcb03980b\"" Sep 4 23:51:36.567360 systemd[1]: Started cri-containerd-5988b69bb7bea5278107051895917ce2fe1f2f83ff1a79cc24c241bfcb03980b.scope - libcontainer container 5988b69bb7bea5278107051895917ce2fe1f2f83ff1a79cc24c241bfcb03980b. Sep 4 23:51:36.709028 containerd[1493]: time="2025-09-04T23:51:36.708969933Z" level=info msg="StartContainer for \"5988b69bb7bea5278107051895917ce2fe1f2f83ff1a79cc24c241bfcb03980b\" returns successfully" Sep 4 23:51:37.159669 kubelet[2748]: I0904 23:51:37.159567 2748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b9b516-8bab-431f-a718-d4b546f75053" path="/var/lib/kubelet/pods/c4b9b516-8bab-431f-a718-d4b546f75053/volumes" Sep 4 23:51:37.513345 kubelet[2748]: I0904 23:51:37.513255 2748 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 23:51:37.518204 kubelet[2748]: I0904 23:51:37.518148 2748 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 23:51:38.939735 systemd[1]: Started sshd@21-10.0.0.62:22-10.0.0.1:57190.service - OpenSSH per-connection server daemon (10.0.0.1:57190). 
Sep 4 23:51:39.095764 sshd[6471]: Accepted publickey for core from 10.0.0.1 port 57190 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:39.099009 sshd-session[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:39.107156 systemd-logind[1475]: New session 22 of user core. Sep 4 23:51:39.116371 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 23:51:39.659557 sshd[6476]: Connection closed by 10.0.0.1 port 57190 Sep 4 23:51:39.661809 sshd-session[6471]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:39.673207 systemd[1]: sshd@21-10.0.0.62:22-10.0.0.1:57190.service: Deactivated successfully. Sep 4 23:51:39.676294 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:51:39.677475 systemd-logind[1475]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:51:39.684763 systemd[1]: Started sshd@22-10.0.0.62:22-10.0.0.1:57206.service - OpenSSH per-connection server daemon (10.0.0.1:57206). Sep 4 23:51:39.685564 systemd-logind[1475]: Removed session 22. Sep 4 23:51:39.754800 sshd[6488]: Accepted publickey for core from 10.0.0.1 port 57206 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:39.757290 sshd-session[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:39.763115 systemd-logind[1475]: New session 23 of user core. Sep 4 23:51:39.769281 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 23:51:41.465707 sshd[6491]: Connection closed by 10.0.0.1 port 57206 Sep 4 23:51:41.467641 sshd-session[6488]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:41.483400 systemd[1]: sshd@22-10.0.0.62:22-10.0.0.1:57206.service: Deactivated successfully. Sep 4 23:51:41.486023 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 23:51:41.487352 systemd-logind[1475]: Session 23 logged out. Waiting for processes to exit. 
Sep 4 23:51:41.497533 systemd[1]: Started sshd@23-10.0.0.62:22-10.0.0.1:40494.service - OpenSSH per-connection server daemon (10.0.0.1:40494). Sep 4 23:51:41.499090 systemd-logind[1475]: Removed session 23. Sep 4 23:51:41.537961 sshd[6504]: Accepted publickey for core from 10.0.0.1 port 40494 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:41.540058 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:41.544966 systemd-logind[1475]: New session 24 of user core. Sep 4 23:51:41.554313 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 23:51:48.087931 kubelet[2748]: I0904 23:51:48.087256 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-28pvw" podStartSLOduration=83.08669386 podStartE2EDuration="2m7.087235541s" podCreationTimestamp="2025-09-04 23:49:41 +0000 UTC" firstStartedPulling="2025-09-04 23:50:51.789795958 +0000 UTC m=+106.734879862" lastFinishedPulling="2025-09-04 23:51:35.790337639 +0000 UTC m=+150.735421543" observedRunningTime="2025-09-04 23:51:36.785884757 +0000 UTC m=+151.730968681" watchObservedRunningTime="2025-09-04 23:51:48.087235541 +0000 UTC m=+163.032319455" Sep 4 23:51:49.153909 kubelet[2748]: E0904 23:51:49.153826 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:49.154549 kubelet[2748]: E0904 23:51:49.153839 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:50.249338 sshd[6507]: Connection closed by 10.0.0.1 port 40494 Sep 4 23:51:50.251044 sshd-session[6504]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:50.264915 systemd[1]: sshd@23-10.0.0.62:22-10.0.0.1:40494.service: Deactivated successfully. 
Sep 4 23:51:50.268809 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:51:50.272263 systemd-logind[1475]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:51:50.278708 systemd[1]: Started sshd@24-10.0.0.62:22-10.0.0.1:33130.service - OpenSSH per-connection server daemon (10.0.0.1:33130). Sep 4 23:51:50.280276 systemd-logind[1475]: Removed session 24. Sep 4 23:51:50.340012 sshd[6556]: Accepted publickey for core from 10.0.0.1 port 33130 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:50.343001 sshd-session[6556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:50.351715 systemd-logind[1475]: New session 25 of user core. Sep 4 23:51:50.370469 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 23:51:51.651383 sshd[6559]: Connection closed by 10.0.0.1 port 33130 Sep 4 23:51:51.665937 sshd-session[6556]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:51.721551 systemd[1]: sshd@24-10.0.0.62:22-10.0.0.1:33130.service: Deactivated successfully. Sep 4 23:51:51.725589 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:51:51.735188 systemd-logind[1475]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:51:51.756916 systemd[1]: Started sshd@25-10.0.0.62:22-10.0.0.1:33146.service - OpenSSH per-connection server daemon (10.0.0.1:33146). Sep 4 23:51:51.759275 systemd-logind[1475]: Removed session 25. Sep 4 23:51:51.847201 sshd[6589]: Accepted publickey for core from 10.0.0.1 port 33146 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:51.854916 sshd-session[6589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:51.867135 systemd-logind[1475]: New session 26 of user core. Sep 4 23:51:51.881545 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 23:51:52.520996 sshd[6596]: Connection closed by 10.0.0.1 port 33146 Sep 4 23:51:52.527410 sshd-session[6589]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:52.538021 systemd-logind[1475]: Session 26 logged out. Waiting for processes to exit. Sep 4 23:51:52.542444 systemd[1]: sshd@25-10.0.0.62:22-10.0.0.1:33146.service: Deactivated successfully. Sep 4 23:51:52.552032 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 23:51:52.564723 systemd-logind[1475]: Removed session 26. Sep 4 23:51:57.152305 kubelet[2748]: E0904 23:51:57.152248 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:57.535623 systemd[1]: Started sshd@26-10.0.0.62:22-10.0.0.1:33154.service - OpenSSH per-connection server daemon (10.0.0.1:33154). Sep 4 23:51:57.626451 sshd[6631]: Accepted publickey for core from 10.0.0.1 port 33154 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:51:57.632548 sshd-session[6631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:57.643709 systemd-logind[1475]: New session 27 of user core. Sep 4 23:51:57.656422 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 23:51:58.388170 sshd[6633]: Connection closed by 10.0.0.1 port 33154 Sep 4 23:51:58.409918 sshd-session[6631]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:58.415607 systemd[1]: sshd@26-10.0.0.62:22-10.0.0.1:33154.service: Deactivated successfully. Sep 4 23:51:58.418273 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 23:51:58.419184 systemd-logind[1475]: Session 27 logged out. Waiting for processes to exit. Sep 4 23:51:58.420545 systemd-logind[1475]: Removed session 27. Sep 4 23:52:03.407561 systemd[1]: Started sshd@27-10.0.0.62:22-10.0.0.1:43496.service - OpenSSH per-connection server daemon (10.0.0.1:43496). 
Sep 4 23:52:03.448362 sshd[6667]: Accepted publickey for core from 10.0.0.1 port 43496 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:52:03.450870 sshd-session[6667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:03.455982 systemd-logind[1475]: New session 28 of user core. Sep 4 23:52:03.461377 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 23:52:03.769930 sshd[6669]: Connection closed by 10.0.0.1 port 43496 Sep 4 23:52:03.770415 sshd-session[6667]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:03.775920 systemd-logind[1475]: Session 28 logged out. Waiting for processes to exit. Sep 4 23:52:03.776786 systemd[1]: sshd@27-10.0.0.62:22-10.0.0.1:43496.service: Deactivated successfully. Sep 4 23:52:03.781663 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 23:52:03.785333 systemd-logind[1475]: Removed session 28. Sep 4 23:52:05.152098 kubelet[2748]: E0904 23:52:05.152049 2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:07.797701 kubelet[2748]: I0904 23:52:07.797610 2748 scope.go:117] "RemoveContainer" containerID="4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb" Sep 4 23:52:07.808914 containerd[1493]: time="2025-09-04T23:52:07.808842854Z" level=info msg="RemoveContainer for \"4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb\"" Sep 4 23:52:08.039309 containerd[1493]: time="2025-09-04T23:52:08.039231864Z" level=info msg="RemoveContainer for \"4bca5248225c2e308a0f30d362dcc30d40b8e4ad78bb3d808b27356abeec4edb\" returns successfully" Sep 4 23:52:08.051762 kubelet[2748]: I0904 23:52:08.051580 2748 scope.go:117] "RemoveContainer" containerID="4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d" Sep 4 23:52:08.057207 containerd[1493]: time="2025-09-04T23:52:08.055490454Z" 
level=info msg="RemoveContainer for \"4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d\"" Sep 4 23:52:08.221144 containerd[1493]: time="2025-09-04T23:52:08.221078234Z" level=info msg="RemoveContainer for \"4c534df686967197dbe373b1104c7a69f7a564dd5ef020b9f86a567b0130311d\" returns successfully" Sep 4 23:52:08.222748 containerd[1493]: time="2025-09-04T23:52:08.222698733Z" level=info msg="StopPodSandbox for \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\"" Sep 4 23:52:08.784796 systemd[1]: Started sshd@28-10.0.0.62:22-10.0.0.1:43508.service - OpenSSH per-connection server daemon (10.0.0.1:43508). Sep 4 23:52:08.894357 sshd[6697]: Accepted publickey for core from 10.0.0.1 port 43508 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs Sep 4 23:52:08.896558 sshd-session[6697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:08.901488 systemd-logind[1475]: New session 29 of user core. Sep 4 23:52:08.909410 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 4 23:52:09.112437 sshd[6703]: Connection closed by 10.0.0.1 port 43508 Sep 4 23:52:09.113541 sshd-session[6697]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:09.118643 systemd[1]: sshd@28-10.0.0.62:22-10.0.0.1:43508.service: Deactivated successfully. Sep 4 23:52:09.123925 systemd[1]: session-29.scope: Deactivated successfully. Sep 4 23:52:09.124860 systemd-logind[1475]: Session 29 logged out. Waiting for processes to exit. Sep 4 23:52:09.125926 systemd-logind[1475]: Removed session 29. 
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.208 [WARNING][6695] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" WorkloadEndpoint="localhost-k8s-whisker--754df77889--htb2w-eth0"
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.227 [INFO][6695] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01"
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.227 [INFO][6695] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" iface="eth0" netns=""
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.227 [INFO][6695] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01"
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.227 [INFO][6695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01"
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.382 [INFO][6719] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0"
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.382 [INFO][6719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.382 [INFO][6719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.648 [WARNING][6719] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0"
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.648 [INFO][6719] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0"
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.650 [INFO][6719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 4 23:52:09.657339 containerd[1493]: 2025-09-04 23:52:09.653 [INFO][6695] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01"
Sep 4 23:52:09.657339 containerd[1493]: time="2025-09-04T23:52:09.657179041Z" level=info msg="TearDown network for sandbox \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\" successfully"
Sep 4 23:52:09.657339 containerd[1493]: time="2025-09-04T23:52:09.657222283Z" level=info msg="StopPodSandbox for \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\" returns successfully"
Sep 4 23:52:09.658595 containerd[1493]: time="2025-09-04T23:52:09.657901268Z" level=info msg="RemovePodSandbox for \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\""
Sep 4 23:52:09.658595 containerd[1493]: time="2025-09-04T23:52:09.657952305Z" level=info msg="Forcibly stopping sandbox \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\""
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.717 [WARNING][6738] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" WorkloadEndpoint="localhost-k8s-whisker--754df77889--htb2w-eth0"
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.717 [INFO][6738] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01"
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.717 [INFO][6738] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" iface="eth0" netns=""
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.717 [INFO][6738] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01"
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.717 [INFO][6738] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01"
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.750 [INFO][6746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0"
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.750 [INFO][6746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.750 [INFO][6746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.933 [WARNING][6746] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0"
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.933 [INFO][6746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" HandleID="k8s-pod-network.2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01" Workload="localhost-k8s-whisker--754df77889--htb2w-eth0"
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.935 [INFO][6746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 4 23:52:09.942781 containerd[1493]: 2025-09-04 23:52:09.938 [INFO][6738] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01"
Sep 4 23:52:09.947859 containerd[1493]: time="2025-09-04T23:52:09.942838272Z" level=info msg="TearDown network for sandbox \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\" successfully"
Sep 4 23:52:10.683914 containerd[1493]: time="2025-09-04T23:52:10.683825334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:52:10.684636 containerd[1493]: time="2025-09-04T23:52:10.683970649Z" level=info msg="RemovePodSandbox \"2992be279fa9c04cc1e1721d982e2ffc040b34d781145eea8946f9296bc42f01\" returns successfully"
Sep 4 23:52:12.151223 kubelet[2748]: E0904 23:52:12.151108    2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:14.137509 systemd[1]: Started sshd@29-10.0.0.62:22-10.0.0.1:51586.service - OpenSSH per-connection server daemon (10.0.0.1:51586).
Sep 4 23:52:14.151317 kubelet[2748]: E0904 23:52:14.151225    2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:14.222336 sshd[6765]: Accepted publickey for core from 10.0.0.1 port 51586 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs
Sep 4 23:52:14.224545 sshd-session[6765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:14.230393 systemd-logind[1475]: New session 30 of user core.
Sep 4 23:52:14.239405 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 4 23:52:14.493048 sshd[6767]: Connection closed by 10.0.0.1 port 51586
Sep 4 23:52:14.493508 sshd-session[6765]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:14.498922 systemd[1]: sshd@29-10.0.0.62:22-10.0.0.1:51586.service: Deactivated successfully.
Sep 4 23:52:14.501674 systemd[1]: session-30.scope: Deactivated successfully.
Sep 4 23:52:14.502643 systemd-logind[1475]: Session 30 logged out. Waiting for processes to exit.
Sep 4 23:52:14.503794 systemd-logind[1475]: Removed session 30.
Sep 4 23:52:19.152152 kubelet[2748]: E0904 23:52:19.151684    2748 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:19.507498 systemd[1]: Started sshd@30-10.0.0.62:22-10.0.0.1:51594.service - OpenSSH per-connection server daemon (10.0.0.1:51594).
Sep 4 23:52:19.583956 sshd[6828]: Accepted publickey for core from 10.0.0.1 port 51594 ssh2: RSA SHA256:KkidQ30CTGULlu2rLm46i6EZ+D0nGx2BTuiOw+G0GXs
Sep 4 23:52:19.586861 sshd-session[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:19.593064 systemd-logind[1475]: New session 31 of user core.
Sep 4 23:52:19.602496 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 4 23:52:19.830318 sshd[6830]: Connection closed by 10.0.0.1 port 51594
Sep 4 23:52:19.832490 sshd-session[6828]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:19.838035 systemd[1]: sshd@30-10.0.0.62:22-10.0.0.1:51594.service: Deactivated successfully.
Sep 4 23:52:19.841643 systemd[1]: session-31.scope: Deactivated successfully.
Sep 4 23:52:19.842794 systemd-logind[1475]: Session 31 logged out. Waiting for processes to exit.
Sep 4 23:52:19.844205 systemd-logind[1475]: Removed session 31.